Electrical and Electronic Engineering - Theses
-
Automated Assessment of Motor Functions in Stroke using Wearable Sensors
Datta, Shreyasi (2022)

Driven by the aging population and an increase in chronic diseases worldwide, continuous monitoring of human activities and vital signs has become a major focus of research, facilitated by the advent of wearable devices equipped with miniaturized sensors. Compared to bench-top devices in hospitals and laboratories, wearable devices are attractive for improving health outcomes because of their compact form factors and unobtrusive nature. Stroke, a neurological disorder, is a major concern among chronic diseases because it causes high rates of death and disability globally every year. Motor deterioration is the most common effect of stroke, leading to one-sided weakness (i.e., hemiparesis) and limiting movement and coordination. Stroke survivors require regular assessments of motor functionality during the acute, sub-acute and chronic phases of recovery, leading to dependence on human intervention and large expenditures on patient monitoring. Therefore, an automated system for detecting and scoring hemiparesis, independent of continuous specialized medical attention, is necessary. This thesis develops methods to objectively quantify stroke-related motor deterioration using wearable motion sensors, for automated assessment of hemiparesis.

In the first part of the thesis, we use accelerometer data acquired from wrist-worn devices to analyze upper limb movements and identify the presence and severity of hemiparesis in acute stroke, during a set of spontaneous and instructed tasks. We propose measures of time- and frequency-domain coherence between accelerometry-based activity measures from the two arms, which correlate with the clinical gold standard, the National Institutes of Health Stroke Scale (NIHSS). This approach can accurately distinguish between healthy controls, mild-to-moderate and severe hemiparesis through supervised pattern recognition, using a hierarchical classification architecture. We propose additional descriptors of bimanual activity asymmetry that characterize the distribution of acceleration-derived activity surrogates in terms of gross and temporal variability, through a novel bivariate Poincaré analysis method. This yields finer granularity and sensitivity in hemiparesis classification into four classes: control, mild, moderate and severe hemiparesis.

The second part of the thesis analyzes the quality of spontaneous upper limb motion captured using wearable accelerometry. Here, a velocity time series estimated from the acquired data is decomposed into movement elements, which are smoother and sparser in the normal hand than in the paretic hand, and the degree of smoothness correlates with hemiparetic severity. Using statistical features characterizing their bimanual disparity, this method can classify mild-to-moderate and severe hemiparesis with high accuracy. Compared to the activity-based features, this method is more interpretable in terms of joint biomechanics and movement planning, and is robust to noise in the acquired data.

In the third part of the thesis, we propose unsupervised methods for visualizing bimanual asymmetry in hemiparesis assessment, using motion templates representative of well-defined instructed tasks. These methods aim to model the qualitative progression of motor deterioration over time, rather than single-point measurements, and to handle settings where class labels representing clinical severity are unavailable. We propose variants of the Visual Assessment of (cluster) Tendency (VAT) algorithm to study cluster evolution through heat maps, representing instructed task patterns through local time-series characteristics known as shapelets. These shapelets transform high-dimensional sensor data into low-dimensional feature vectors for VAT evaluation. We demonstrate the significance of these methods for efficient and interpretable cluster-tendency assessment in anomaly detection and continuous motion monitoring, applicable not only to hemiparesis assessment but also to identifying motor dysfunction in other neurological disorders and to activity recognition problems.

Finally, in the fourth part of the thesis, we apply the above methods to objectively measure gait asymmetry in stroke survivors, using lower limb position data from wearable infrared markers and camera-based motion capture devices. These methods can efficiently quantify the severity of lower limb hemiparesis, making them suitable for automated gait monitoring during extended training and rehabilitation in the chronic phase of recovery.
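As a hedged illustration of the bivariate Poincaré idea described above, the sketch below computes the classical lag-1 Poincaré descriptors (SD1 for temporal variability, SD2 for gross variability) of each arm's activity surrogate and contrasts them in a simple asymmetry index. The index, the simulated data and the function names are illustrative assumptions, not the thesis's exact formulation.

```python
import numpy as np

def poincare_descriptors(x):
    """Lag-1 Poincare descriptors of a 1-D activity series: SD1 captures
    short-term (temporal) variability, SD2 gross variability."""
    a, b = x[:-1], x[1:]
    sd1 = np.sqrt(np.var(b - a) / 2.0)  # spread perpendicular to the identity line
    sd2 = np.sqrt(np.var(b + a) / 2.0)  # spread along the identity line
    return sd1, sd2

def bimanual_asymmetry(left, right):
    """Illustrative (hypothetical) index contrasting the two arms' descriptors:
    near 0 for symmetric activity, growing with one-sided attenuation."""
    sd1_l, sd2_l = poincare_descriptors(left)
    sd1_r, sd2_r = poincare_descriptors(right)
    return abs(np.log((sd1_l * sd2_l + 1e-12) / (sd1_r * sd2_r + 1e-12)))

# Simulated epoch-wise activity surrogates for the two wrists
rng = np.random.default_rng(0)
unaffected = rng.gamma(2.0, 1.0, 500)
paretic = 0.4 * rng.gamma(2.0, 1.0, 500)  # attenuated movement on the paretic side
print(f"asymmetry index: {bimanual_asymmetry(unaffected, paretic):.2f}")
```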
-
Thermo-mechanical energy storage applications for energy system decarbonisation
Vecchi, Andrea (2022)

This research explores the prospective application of thermo-mechanical energy storage technologies for energy system decarbonisation. It first characterises the techno-economic performance of one such technology, Liquid Air Energy Storage (LAES), when operated within the power system to supply energy and reserve services. LAES operation as a multi-energy asset is then studied. To conclude, the potential of six established and novel thermo-mechanical energy storage concepts is cross-compared and benchmarked against incumbent storage technologies for long-duration energy storage applications.
-
Scenario Based Optimization over Uncertain System Identification Models
Wang, Xiaopuwen (2022)

A model describes the relationship between the inputs and outputs of a system and offers an explanation of the system behaviour. Mathematical models of dynamical systems are widely used in many fields of science and engineering. There are uncertainties associated with such models, and it is important to quantify them. System identification can be used to obtain models of dynamical systems. A typical system identification approach is to select a parameterized model class and estimate the unknown parameters from observed data. This includes methods such as least squares, prediction error methods and instrumental variable methods, which give a point estimate of the unknown parameters. When the observed data are noisy, there is always an error between the estimate and the true model parameter; this error represents the model uncertainty. Another way to describe the model uncertainty is through confidence regions. A confidence region is a set of plausible values for the unknown true parameter, and contains the true parameter with a prescribed probability. There are many ways to construct confidence regions. Using asymptotic system identification theory, in which the number of data points is assumed to go to infinity, confidence regions that contain the true model parameter with a specified probability can be found. In recent years, methods such as Leave-out Sign-dominant Correlation Regions (LSCR) and Sign-Perturbed Sums (SPS) have also been developed, which provide confidence regions when only a finite number of data points is available.

In an optimization-based design problem, a cost function reflecting the design target is minimized. The resulting decision variable and minimized cost depend on the model parameter, which is unknown but can be described by the confidence region. One approach is robust design, which minimizes the worst-case value of the cost function over the set of unknown parameters. This problem can be difficult to solve in some cases. Therefore, a computationally tractable method called the scenario approach is used in this thesis: samples are drawn from the model uncertainty set, and an optimization problem is then solved based on the drawn samples.

In this thesis, approaches that combine system identification methods and the scenario approach are investigated for different data-generating systems, in three settings. In the Bayesian framework, a Bayesian approach to system identification is considered and the system parameters are viewed as a realization of a random vector. Using the observed data, the posterior density of the system parameters can be computed and used as the model uncertainty set from which samples are drawn for the scenario approach; algorithms are provided to obtain approximately i.i.d. samples from the posterior distribution. In the non-Bayesian framework, a model is obtained via system identification and the uncertainty associated with the model is characterized by the distribution of the estimation error. By characterizing this estimation error, samples can be constructed from it; algorithms for obtaining the samples are provided, together with theoretical results. In the Sign-Perturbed Sums framework, the SPS method is used to find the confidence region, which is taken as the model uncertainty. By testing whether a given point belongs to the SPS confidence region, algorithms combining SPS and the scenario approach are designed.
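To make the scenario approach concrete, here is a minimal sketch under invented assumptions: a two-dimensional parameter with a Gaussian posterior (as in the Bayesian setting above) and a simple quadratic design cost. The cost function, dimensions and numbers are illustrative only, not the thesis's problem.

```python
import numpy as np
from scipy.optimize import minimize

rng = np.random.default_rng(1)

# Assumed posterior over the unknown model parameter theta, e.g. obtained
# from a Bayesian identification step (mean and covariance are placeholders).
theta_hat = np.array([1.0, 0.5])
Sigma = np.array([[0.04, 0.01],
                  [0.01, 0.09]])

# Draw N i.i.d. scenarios from the model uncertainty set
N = 200
scenarios = rng.multivariate_normal(theta_hat, Sigma, size=N)

def cost(x, theta):
    """Illustrative design cost in decision x for model parameter theta."""
    return (x[0] - theta[0]) ** 2 + 2.0 * (x[1] - theta[1]) ** 2

# Scenario program: minimize the worst case over the drawn scenarios only,
# instead of over the whole (possibly intractable) uncertainty set.
def worst_case(x):
    return max(cost(x, th) for th in scenarios)

res = minimize(worst_case, x0=np.zeros(2), method="Nelder-Mead")
print("scenario-optimal decision:", res.x, "worst sampled cost:", worst_case(res.x))
```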
-
Uniformly Bounded State Estimation over Multiple Access Channels
Zafzouf, Ghassen (2022)

In this doctoral thesis, a characterization of the zero-error capacity region for three different classes of multiple access channels (MACs) is derived. The first class considered is a two-user MAC with a common message that captures the correlation between transmitters. This model is then extended to an arbitrary number of users M >= 2. The last class of MACs is a further extension to a more general case in which inter-user correlation is modeled by a common message seen by all users as well as pairwise shared messages. In this research, we look at the zero-error capacity, which differs from the more commonly studied small-error capacity, from a nonprobabilistic angle. The obtained characterization is based on so-called nonstochastic information, and is valid not only for asymptotically large coding block-lengths but also for finite lengths. Understanding how to coordinate unambiguous communication through MACs, such that several unrelated senders can simultaneously send as much information as possible, is of great interest, especially with the emergence of new paradigms such as the Internet of Things (IoT) and Machine-to-Machine (M2M) communication.

Next, using the characterization of the zero-error capacity region for the two-user MAC, we investigate the problem of distributed state estimation under the criterion of uniformly bounded estimation errors. It is shown that if there exists a coder-estimator tuple achieving this criterion, then the vector of topological entropies of the linear systems whose states are being estimated must lie within the zero-error capacity region of the communication channel. Conversely, we prove that if the observed plants have a topological entropy vector inside the interior of the zero-error capacity region, the existence of a coder-estimator tuple achieving uniformly bounded state estimation errors is guaranteed. This result relates the channel properties to the plant dynamics and paves the way toward understanding information flows in networked control systems with multiple transmitters.

Finally, we characterize the fundamental tradeoff between communication data rate, code-length, system dynamics and state estimation performance. To this end, a universal lower bound on the time-asymptotic estimation error is obtained using volume-based analysis. Additionally, to provide a guarantee on the estimation performance, an upper bound on the error is derived when the measurements are quantized. When the code-length is large, we show that these lower and upper bounds converge to the same limit.
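As a hedged sketch of the necessity condition above: the topological entropy of a linear plant x(t+1) = A x(t) is the sum of log2 |lambda_i| over its unstable eigenvalues, and bounded estimation requires the entropy vector to lie in the zero-error capacity region. The plant matrices and the pentagon-shaped rate region used in the membership test below are invented for illustration; the thesis's actual region is characterized via nonstochastic information, not assumed here.

```python
import numpy as np

def topological_entropy(A):
    """Topological entropy of x(t+1) = A x(t): sum of log2|lambda| over
    eigenvalues outside the unit circle (bits per time step)."""
    eigs = np.linalg.eigvals(A)
    return float(sum(np.log2(abs(l)) for l in eigs if abs(l) > 1.0))

# Two plants, each observed by one transmitter of a two-user MAC
A1 = np.array([[1.5, 1.0],
               [0.0, 0.8]])
A2 = np.array([[1.2]])
h = np.array([topological_entropy(A1), topological_entropy(A2)])

def in_rate_region(h, r1_max, r2_max, sum_max):
    """Schematic membership test for a pentagon-shaped rate region with
    per-user and sum-rate constraints (bits per channel use)."""
    return h[0] < r1_max and h[1] < r2_max and h[0] + h[1] < sum_max

# Uniformly bounded estimation is possible only if h lies in the interior
# of the channel's zero-error capacity region.
print("entropy vector:", h)
print("inside region:", in_rate_region(h, r1_max=1.0, r2_max=1.0, sum_max=1.5))
```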
-
Architecture and Policy Design for Next-generation Access Networks
Roy, Dibbendu (2022)

More than twenty years into the twenty-first century, communication networks are undergoing a paradigm shift. Due to the increase in available computing power, computing-intensive applications in the form of augmented/virtual reality, the internet of vehicles, remote automation, etc., have emerged in addition to traditional voice, video and data. The increasing role of computing in executing applications over networks led to the emergence of cloud computing, and subsequently edge and fog computing. Next-generation networks are envisioned to be application-driven, designed to satisfy the end-to-end (E2E) quality of service (QoS) and quality of experience (QoE) requirements of applications across both networking and computing paradigms. This thesis focuses on achieving these goals in the access part of the network, which connects end-users to service providers. The access segment experiences significantly more dynamic behavior than its core counterpart, owing to the independent and random nature of users and customers. The thesis investigates two popular access network technologies, Passive Optical Networks (PON) and Radio Access Networks (RAN), in the context of fog and edge computing. While PON is a wired access network, RAN connects to users wirelessly, with a wired segment from the radio stations to the edge/core servers (known as the backhaul).

It is desirable that fog/edge nodes be integrated with PON in a cost-effective and seamless manner, without altering the protocols in place. In addition, it is important to design dynamic bandwidth allocation (DBA) policies that can satisfy strict QoS requirements. Chapter 3 of this thesis demonstrates how to design a cost-effective fog-integrated architecture for PON. It also delineates a dynamic bandwidth allocation protocol that enables communication between the fog node and users without significantly changing the existing DBA of PON. In Chapter 4, the problem of satisfying strict QoS requirements is solved using the Model Predictive Control (MPC) technique. For this, an innovative delay-tracking mechanism using virtual queues is developed, allowing far-sighted decisions in contrast to the short-sighted ones commonly employed in the literature.

In RAN, it is envisioned that future networks be zero-touch, meaning the network can intelligently automate its policies according to demand, significantly reducing human intervention. Orchestrators such as the software defined network (SDN) controller (for networks) and Kubernetes (for servers) are key enablers of a zero-touch implementation. To meet the different QoE requirements of new applications, networking and computing resources are virtualized and sliced up per application type, a practice known as network slicing. The two orchestrators should therefore work jointly to create slices of networking and computing resources that satisfy E2E QoE. In addition, they must establish the relationships between E2E QoE and resources, so that resource requirements can be decided in an automated manner for both deterministic and dynamically changing environments. Chapter 5 of this thesis develops sequential distributed learning and optimization models to learn these relationships under static and dynamic conditions and to take robust slicing decisions that achieve E2E QoE at the backhaul of RAN. The learning process incorporates artificial intelligence (AI) into the slicing process, a crucial step towards zero-touch network design. To summarize, this thesis demonstrates how next-generation access network architectures involving fog/edge computing can be designed, operated and maintained in an automated and seamless manner.
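As a hedged toy illustration of delay tracking with virtual queues in a bandwidth-allocation loop: the virtual queue accumulates violations of a delay target, and the grant rule gives extra bandwidth when it grows. The arrival model, the delay surrogate and all constants are invented, and the simple proportional grant rule below stands in for, and is not, the thesis's MPC formulation.

```python
import numpy as np

rng = np.random.default_rng(2)

DELAY_TARGET = 1.5    # target delay in cycles (illustrative QoS requirement)
CAPACITY = 12.0       # grant budget per polling cycle (arbitrary units)
MEAN_ARRIVALS = 8.0

q = 0.0  # real queue backlog
z = 0.0  # virtual queue: accumulated delay-target violations

for t in range(200):
    arrivals = rng.poisson(MEAN_ARRIVALS)
    # Grant rule: serve the backlog plus a correction proportional to the
    # virtual queue, so persistent delay violations attract extra bandwidth.
    grant = min(CAPACITY, q + 0.5 * z)
    q = max(q + arrivals - grant, 0.0)
    delay_est = q / MEAN_ARRIVALS             # Little's-law style delay surrogate
    z = max(z + delay_est - DELAY_TARGET, 0.0)

print(f"final backlog: {q:.1f}, virtual queue: {z:.1f}")
```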
-
Indoor Optical Wireless Communications Employing Beam Shaping Techniques
Li, Jianghao (2022)

The explosion in the volume of information exchanged within our personal spaces demands new and improved ways of communicating between devices, machines and humans. With the emergence of new modes of user interaction such as virtual reality (VR) and augmented reality (AR), and of new applications such as remote tele-surgery and the smart home, demand for short-haul, high-capacity wireless communications technologies will rise sharply. Such technologies will be expected to offer high-speed, ultra-wideband, low-latency wireless connections and scalable networking. Achieving these objectives via current radio frequency (RF) wireless technologies such as Wi-Fi, ultra-wideband (UWB) and millimeter-wave communications will face challenges due to spectrum congestion. Optical wireless communications (OWC) has emerged as a promising alternative for ultra-high-speed, data-intensive wireless connections and networks in indoor applications, owing to unique advantages such as scalable unregulated bandwidth, high security and flexible networking solutions.

OWC systems require optical sources such as lasers to generate steerable optical wireless beams that establish coverage zones. These laser beams exhibit nonuniform intensity profiles across their cross-sections. To realize high-performance OWC with stable performance for all users, regardless of their location within the coverage area, the optical beams require shaping to equalize the intensity variation. This thesis presents a comprehensive investigation of beam shaping techniques to improve the performance of current OWC systems, including the effective coverage and the robustness to pointing errors and link blockages. Few-mode propagation in fibers and orbital angular momentum (OAM) mode propagation in free space were exploited to propose specific beam shaping methods. These were demonstrated in proof-of-concept experiments and investigated using analytical and numerical models. Specific receiver technologies and configurations were used to evaluate high-capacity OWC downlink transmission and its performance.

A multi-user OWC system using few-mode based beam shaping with time-slot coding (TSC) has been proposed and numerically investigated. Employing few-mode based beam shaping enhances the effective coverage, the tolerance to signal overlap and the delay interval. The impact of the power ratio between the transmitted modes has also been thoroughly evaluated in simulation. Furthermore, a novel dual-single-sideband multi-user system using few-mode based beam shaping is introduced to further improve spectral efficiency. Adaptive equalization methods, including least mean square (LMS) and recursive least squares (RLS), are theoretically investigated and experimentally compared for indoor OWC systems employing few-mode based uniform beam shaping. For a reference BER threshold at the KP4 FEC limit, the LMS-based single-equalizer scheme with a step size of 0.0035 and 61 taps has the lowest computational complexity.

Repetition coding (RC) and Alamouti-type orthogonal space-time block coding (STBC) have been investigated as transmitter diversity techniques for indoor OWC employing OAM mode-based beam shaping with different mode power ratios. In addition, proof-of-concept experimental demonstrations and simulations of OAM mode-based beam shaping for indoor OWC systems have been conducted. The results show that the effective coverage can be expanded significantly compared to conventional OWC systems without beam shaping, and that for a symbol rate of 10 Gbaud the optimal performance is obtained when the power ratio of OAM0/OAM1 and OAM0/OAM2 is 0.33 and 0.38, respectively. The BER performance distribution in different directions within the optical cell has also been evaluated.
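To give a feel for how mixing OAM modes flattens the coverage, the sketch below superposes the radial intensity profiles of a Gaussian OAM0 beam (bright centre) and an OAM1 ring beam at a given power ratio, then reports the intensity ripple across a nominal cell. The 0.33 ratio echoes the figure quoted above, but the beam waist, cell radius and the incoherent power-addition simplification are our own assumptions, not the thesis's model.

```python
import numpy as np

def lg_intensity(r, l, w=1.0):
    """Radial intensity of a Laguerre-Gaussian beam (p = 0, azimuthal index l),
    normalized to unit total power."""
    rho = 2.0 * r**2 / w**2
    profile = rho**abs(l) * np.exp(-rho)
    power = np.trapz(profile * 2.0 * np.pi * r, r)
    return profile / power

r = np.linspace(0.01, 2.5, 500)

# Incoherent power superposition of OAM0 (Gaussian) and OAM1 (ring) with
# power ratio p = P(OAM0)/P(OAM1); 0.33 echoes the reported optimum.
p = 0.33
shaped = p * lg_intensity(r, 0) + lg_intensity(r, 1)

# Flatness metric: max/min intensity ripple across a nominal coverage cell
cell = r < 1.2
ripple_shaped = shaped[cell].max() / shaped[cell].min()
ripple_gauss = lg_intensity(r, 0)[cell].max() / lg_intensity(r, 0)[cell].min()
print(f"ripple with shaping: {ripple_shaped:.1f}x, Gaussian only: {ripple_gauss:.1f}x")
```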
-
Simulation and Fabrication of Printable/Solution Processed Inorganic Quantum Dot LEDs
Yu, Yang (2022)

Modern integrated circuit (IC) technology is silicon-based, and its manufacturing has evolved into an extremely complicated process. State-of-the-art complementary metal oxide semiconductor (CMOS) manufacturing technologies can fabricate high-performance ICs consisting of billions of transistors. The rapid development of the electronics industry facilitates the emergence of novel electronic devices such as flexible displays, virtual/augmented reality (VR/AR) glasses, foldable mobile phones and wearable devices. Alongside this exceptional device performance, however, conventional IC manufacturing has become highly investment- and time-intensive. It costs billions of dollars to build a fab, more than 60% of which goes to expensive facilities, including clean rooms, ultraviolet (UV) lithography systems and extremely high vacuum environments. These capital and technical barriers make the semiconductor industry a game only for tech giants, greatly hindering innovation by small- and medium-sized companies. Over the past decade, solution-processed, or printable, electronics has shown tremendous potential for manufacturing low-cost electronic devices. Compared to conventional manufacturing methods, solution processing is cheaper, simpler and more environmentally friendly. Its compatibility with functional materials and a wide variety of substrates makes solution processing a strong candidate for fabricating novel light emitting diode (LED) displays.

In this thesis, we focus on the realization of low-cost transparent flexible LEDs using quantum dots (QDs) as the emissive material. The electrical and chemical properties of the synthesized functional inks were studied in detail, and a deep understanding of solution processing and modern printing techniques was developed. Using this knowledge, a transparent quantum dot LED (QLED) was fabricated, showing great potential for next-generation displays. The aim of this research is to design and fabricate electronic components for next-generation flexible displays. To achieve this goal, QLEDs were designed and fabricated to demonstrate light emission as a high-performance transparent light source. In addition, LED driving circuitry is required to demonstrate addressing and control of LED devices; thus, the design and fabrication of thin film transistors (TFTs) were investigated for the realization of solution-processed LED driving circuits. The development process is therefore divided into two parts: one focusing on the realization of the QLED itself, and the other on the development of printable TFTs. The structure of the research is as follows.

In the first part of the thesis, we study the properties of the functional inks used to fabricate thin film transistors and QLEDs, as well as their synthesis procedures. Various characterizations were performed to investigate the structure and surface morphology of the different materials. In addition, the working principles of solution processing and several printing methods were studied. By fabricating a prototype transistor, the ink formulation was investigated and optimized for compatibility with the inkjet printing process. Next, a transparent QLED device using solution-processed metal oxide carrier transport layers was fabricated. The carrier transport layer materials were derived by a low-temperature sol-gel combustion process with a post-annealing temperature below 275 degrees Celsius. The introduction of copper doping into the nickel oxide (NiO) interlayer further improved the hole injection efficiency and hence device performance. The QLED turn-on voltage was reduced from 4.5 to 2.5 V, making the QLED compatible with CMOS electronics. While preserving good semiconducting performance, the as-derived thin films show good quality and high optical transparency, facilitating transparent electronics applications. After that, the modelling and simulation of the QLED are discussed. A theoretical framework is developed that provides insight into how each design choice and parameter affects critical attributes of the fabricated QLEDs. The simulation results show good agreement with the experimental results and theoretical analysis, making it possible to further optimize the structure and materials used in QLED fabrication. Lastly, simulations of a QD photodiode are presented, demonstrating the feasibility of QD-based solution-processed optoelectronic devices. Based on these experimental results, we believe that solution-processed or printed electronics can be a game changer for the rapidly developing microelectronics market.
-
Advanced Neural Network-Based Equalization for Short-Reach Direct Detection Systems
Xu, Zhaopeng (2022)

Driven by the exponential growth of Internet traffic, mostly from cloud and mobile services in recent years, there is an increasing demand for high-speed, low-cost optical communication systems in short-reach applications such as data center interconnects. Compared with coherent detection, direct detection optical links are well suited to such applications due to their low cost and simple structure. However, intensity-only direct detection, i.e., simple square-law detection of the optical field, produces a nonlinear channel when combined with chromatic dispersion. Moreover, to meet the low-cost target, bandwidth-limited transceivers and cheap lasers such as directly modulated lasers (DMLs) are preferred, which suffer from non-ideal frequency response and chirp impairments. These mixed linear and nonlinear impairments can strongly degrade bit error rate (BER) performance and limit the achievable system capacity. As such, efficient nonlinear equalization techniques are of vital importance for guaranteeing the desired system BER performance. With the rapid development of machine learning, various neural network (NN)-based equalizers have recently been proposed as digital signal processing (DSP) tools to deal effectively with these impairments. NN-based equalizers attract much attention since they usually outperform traditional methods such as feedforward equalization (FFE), decision feedback equalization (DFE) and Volterra series-based equalization in BER performance, enabling higher data-rate transmission.

Besides BER performance, another important concern is the computational complexity (CC) of the receiver. For NN-based equalization, CC matters in both the training and the equalization processes. Training usually requires a large number of symbols and epochs, and when the link scenario changes, the performance of the old NNs degrades and they may need to be retrained for the new scenario, which is computationally inefficient. As for equalization, the number of multiplications per equalized symbol can only be around a few tens for real-time DSP implementation; increased CC leads to higher latency and larger power consumption at the receiver. It is therefore highly desirable to reduce CC, in both NN training and equalization.

In this thesis, the performance and CC of NN-based equalizers are the main concerns for short-reach direct detection links. The CC of four commonly used NN-based equalizers, i.e., the feedforward NN (FNN), radial basis function NN (RBF-NN), auto-regressive recurrent NN (AR-RNN) and layer recurrent NN (L-RNN), is theoretically derived, and their BER performance is compared in numerical simulations of a four-level pulse amplitude modulation (PAM4) direct detection optical link. FNN-based equalizers are found to have the lowest CC, while AR-RNN-based equalizers exhibit the best BER performance. Guidelines are provided for proper NN selection considering the tradeoff between BER and CC.

The thesis then focuses on performance-enhanced NN-based equalizers, investigating several advanced NN designs. A novel cascade FNN/RNN is developed and demonstrated in a 100-Gb/s PAM4 transmission experiment, showing superior performance over traditional approaches with limited additional CC. Different equalization schemes are then compared with the aim of jointly equalizing both linear and nonlinear impairments; besides cascade NNs, inserting an FFE/DFE block after the NN can also slightly improve system performance. Moreover, a thorough analysis is made of the BER and CC impact of all possible additional connections that can be added to a 2-layer FNN; among them, the cascade and recurrent connections are found to be the most important for equalization performance.

Lastly, the thesis discusses several approaches to reduce the CC of NN-based equalization. For NN training, transfer learning (TL) is applied in short-reach applications: the number of training symbols and epochs needed for equalization in the target system can be greatly decreased with the help of NNs trained on source systems (systems different from, but related to, the target one). Instead of being trained from scratch, the NNs start from information gained in the source system, enabling an expeditious transition from one optical link to another. For NN equalization, multi-symbol prediction is proposed for different types of NNs, which enables effective weight sharing and lowers the number of multiplications required per received symbol. For the FNN and cascade FNN, this CC reduction is achieved without BER loss, while for the RNN, multi-symbol equalization sacrifices a small amount of BER performance; nevertheless, if the BER requirement is not strict, the CC reduction for the RNN is still significant. Moreover, weight pruning is employed in NN-based equalization for short-reach links: a sparsely connected cascade RNN is demonstrated for equalization in a 100-Gb/s PAM4 link while upholding the BER achieved by the fully connected version. Pruning is also combined with the multi-symbol equalization schemes, further reducing the overall CC.
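As a hedged, self-contained sketch of sliding-window FNN equalization for PAM4 (the lowest-CC option above), the PyTorch toy below uses a crude surrogate channel (three fixed ISI taps plus a quadratic term standing in for dispersion and square-law detection), which is not the thesis's simulation setup; the window length, hidden size and channel are all invented. The final line counts multiplications per equalized symbol in the way the text describes.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

torch.manual_seed(0)

# Toy PAM4 link: fixed ISI taps plus a quadratic term as a crude surrogate
# for chromatic dispersion followed by square-law direct detection.
levels = torch.tensor([-3.0, -1.0, 1.0, 3.0])
symbols = levels[torch.randint(0, 4, (20000,))]
taps = torch.tensor([0.2, 1.0, 0.3])
rx = F.conv1d(symbols.view(1, 1, -1), taps.view(1, 1, -1), padding=1).view(-1)
rx = rx + 0.05 * rx**2 + 0.1 * torch.randn_like(rx)

# Sliding window of W received samples -> estimate of the centre symbol
W, hidden = 11, 24
X = rx.unfold(0, W, 1)                       # shape: (N - W + 1, W)
y = symbols[W // 2 : W // 2 + X.shape[0]]

fnn = nn.Sequential(nn.Linear(W, hidden), nn.ReLU(), nn.Linear(hidden, 1))
opt = torch.optim.Adam(fnn.parameters(), lr=1e-3)
for epoch in range(300):
    opt.zero_grad()
    loss = F.mse_loss(fnn(X).squeeze(), y)
    loss.backward()
    opt.step()

# Hard decision to the nearest PAM4 level, then symbol error rate
est = fnn(X).squeeze().detach()
dec = levels[(est.view(-1, 1) - levels).abs().argmin(dim=1)]
print("SER:", (dec != y).float().mean().item())
# Multiplications per equalized symbol for this 2-layer FNN
print("mults/symbol:", W * hidden + hidden)
```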
-
Cognition related electrophysiological and neuro-optical activity and dynamics in the CA1 region of the hippocampus
Sun, Dechuan (2022)

Cognitive dysfunction characterises a wide range of neurological disorders, including schizophrenia and Alzheimer's disease (AD). Millions of people globally are affected by cognitive disorders annually. To date, hundreds of billions of dollars have been invested in pathogenesis research and the development of new treatments. Unfortunately, the pathogenic mechanisms of cognitive dysfunction in these disorders remain obscure, and there are still no effective treatments. Clinical studies have found varying degrees of hippocampal dysfunction or impairment in these disorders. The hippocampus is a major brain structure that plays important roles in information processing such as memory, navigation and decision making. However, the multi-neuronal dynamics and modulation mechanisms during these processes remain unclear, especially the function of neuronal ensembles. It is therefore important to deepen our understanding of the fundamental mechanisms of hippocampal information processing, to pave the way for the design of future treatments for cognitive disorders. This thesis presents investigations of (i) the electrophysiological effects on ensemble neuronal behaviour of a category of cognitive-disorder treatment drugs and potassium channel (Kv) modulators; (ii) hippocampal multi-neuronal dynamics in a commonly used Alzheimer's disease drug model; (iii) the mechanisms underlying hippocampal multi-sensory information processing; and (iv) real-time reconstruction of calcium-signal ensemble recordings for potential use in a brain-computer interface.

Antipsychotic drugs (APDs) are commonly used for psychotic illness, but most studies have focused on their dopaminergic effects. Although the treatment mechanism is not clear, it has been hypothesized that voltage-gated potassium channels may play a significant role. In Chapter 3, the effects of both APDs and various voltage-gated potassium channel modulators (KVMs) on intracranial brain-wave patterns were tested in mice. Local field potentials (LFPs), which reflect the activity of many neurons proximate to the recording electrode, were recorded in the hippocampus and prefrontal cortex to characterise local signal oscillations (spectral power and cross-frequency coupling) and inter-region signal communication (coherence). Given the overlap of effects observed between APDs and KVMs, the mechanisms by which APDs affect the properties of LFPs may be due in part to potassium channel modulation.

Previous studies have usually employed multi-electrode arrays to record the activity of neuronal ensembles, but this technique is restricted to a limited number of recording channels and has poor spatial resolution. In this project, a miniaturised fluorescence microscope (miniscope) was used to study hippocampal neural activity, providing wide-field imaging of hundreds of neurons simultaneously in freely behaving animals over prolonged periods of time. Chapters 4-6 take advantage of this technique to study the properties of hippocampal neuronal ensembles.

Scopolamine is a commonly used pharmacological model in Alzheimer's disease research. It mimics the memory impairment seen in AD, likely through blockade of muscarinic acetylcholine receptors (mAChRs). It has been found to impair memory encoding, but its effect on memory retrieval is controversial. In Chapter 4, the activity of hippocampal place cells was recorded on a linear track. Scopolamine severely impaired the sensitivity and stability of place cells, increasing location reconstruction error. This effect was likely related to muscarinic modulation of M-type potassium channels, and the results were reproduced in a simulated hippocampal neural network by varying M-channel conductance.

Hippocampal CA1 neurons are well known for their place sensitivity. However, the degree to which they process non-spatial information, especially in non-task-related experiments, has been a matter of dispute. In Chapter 5, recordings were made while mice were passively exposed to visual stimuli, auditory stimuli or a combination of both. About 10-20% of neurons were sensitive, showing distinct modality-related firing patterns across experiments. Additionally, the neural signals could be decoded to identify the stimuli, with an accuracy of about 70-75% in the visual and auditory stimulus experiments and 60% in the mixed-stimuli experiment.

Although neural calcium signals can be accurately decoded, the analysis pipelines are generally very time-consuming, making them difficult to deploy in real time. In Chapter 6, a form of hippocampal brain-computer interface was constructed, utilising raw calcium activity instead of deconvolved spike trains to decode brain activity in real time. This method was tested separately in several experiments: (i) position reconstruction on a linear track and (ii) visual and auditory stimulus identification. High decoding accuracy was achieved in all experiments, suggesting an efficient way to reconstruct the hippocampal cognitive map in real time. Together, the results of this thesis enhance our understanding of the multi-neuronal dynamics of hippocampal neural networks, providing insights that may inform novel treatments for cognitive disorders in the future.
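As a hedged sketch of the Chapter 6 idea, decoding directly from raw fluorescence rather than deconvolved spikes, the toy below trains a linear classifier on simulated trial-wise calcium activity for two stimulus classes. The data generator, tuning fraction and classifier choice are our own assumptions, not the thesis's pipeline.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(3)

# Simulated raw calcium activity: n_trials x n_neurons, where ~15% of the
# neurons are weakly tuned to one of two stimuli (e.g. visual vs auditory).
n_trials, n_neurons = 400, 200
labels = rng.integers(0, 2, n_trials)
tuned = rng.random(n_neurons) < 0.15
gain = np.where(tuned, 0.8, 0.0)
X = rng.normal(1.0, 1.0, (n_trials, n_neurons)) + np.outer(labels, gain)

# Decode stimulus identity directly from the raw traces, skipping spike
# deconvolution, which is the slow step in offline pipelines.
Xtr, Xte, ytr, yte = train_test_split(X, labels, test_size=0.3, random_state=0)
clf = LogisticRegression(max_iter=2000).fit(Xtr, ytr)
print("decoding accuracy:", clf.score(Xte, yte))
```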
-
Granger Causality Under Quantized and Partially Observed Measurements
Ahmadi, Seyed Salman (2022)

In this thesis, we investigate Granger causality assessment and preservation under quantized signals and partially observed measurements, including filtered or noisy measurements. To do so, we first present a necessary and sufficient rank criterion for the equality of two conditional Gaussian distributions. We introduce a partial finite-order Markov assumption and, under this assumption, characterize Granger causality in terms of the rank of a matrix involving the covariances, which we call the causality matrix. We show that the smallest singular value of the causality matrix gives a lower bound on the distance between the two conditional Gaussian distributions appearing in the definition of Granger causality, and yields a new measure of causality. Unlike most of the literature, this approach does not require system models to be explicitly identified or their parameters to be examined. To the best of our knowledge, this causality matrix and the associated measure of causality, which differs from previously introduced measures such as Geweke's and Kullback-Leibler divergence-based measures, have not appeared in the literature before.

Furthermore, we introduce conditions to preserve and assess causality under general quantization schemes. In the causality preservation case, we present conditions under which a variation of Wiener-Granger causality between quantized signals implies Granger causality between the underlying Gaussian signals. We also show that assessing causality between Gaussian signals through quantized data can be achieved under more relaxed conditions and assumptions, and we give explicit conditions for non-uniform, uniform and high-resolution quantizers. Apart from the assumed partial Markov order and joint Gaussianity, this approach does not require the parameters of a system model to be identified, and no assumptions are made on the identifiability of the jointly Gaussian random processes through the quantized observations. To the best of our knowledge, this is the first work on causality investigation under quantization, or indeed under any nonlinear effect.

Moreover, we consider two different setups for the effects of filtering on Granger causality. First, we make no assumption on the Markovianity of the Gaussian processes and assume that all past information is available. In this setting, we present necessary and sufficient conditions for preserving Granger causality under causal and non-causal linear time-invariant filtering; these results generalize and improve previously published results. We then consider the setting where only finite information about the processes is available and a partial finite-order Markov property holds. We derive conditions under which causality can be reliably preserved and assessed from the second-order moments of filtered measurements. The filters may be noncausal or nonminimum-phase, as long as they are stable. To the best of our knowledge, the preservation of causality for such a general class of filters, and its connection to causality assessment, have not been considered before. We also investigate the effects of additive noise on causality and present conditions to preserve and assess it; the additive noise signals are not required to be Gaussian or independent. For the first time, we introduce results for the preservation and assessment of causality under such general conditions.
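As a hedged numerical illustration of the covariance-based viewpoint above (not the thesis's exact causality-matrix construction): under joint Gaussianity and a finite Markov order p, x fails to Granger-cause y exactly when the cross-covariance between y_t and the past of x, after partialling out the past of y, vanishes; its singular values then act as a distance-like strength measure. The VAR example and lag order are invented.

```python
import numpy as np

rng = np.random.default_rng(4)

# Bivariate VAR(1) in which x Granger-causes y (via the 0.6 coupling term)
T = 5000
x, y = np.zeros(T), np.zeros(T)
for t in range(1, T):
    x[t] = 0.5 * x[t - 1] + rng.normal()
    y[t] = 0.4 * y[t - 1] + 0.6 * x[t - 1] + rng.normal()

def causal_strength(src, dst, p=2):
    """Cross-covariance between dst_t and the past of src, conditioned on
    the past of dst (joint Gaussianity, finite Markov order p assumed).
    Returns the singular values; all near zero means no Granger causality."""
    rows = np.array([np.r_[dst[t], src[t - p:t], dst[t - p:t]]
                     for t in range(p, len(dst))])
    C = np.cov(rows, rowvar=False)
    c_dx = C[0, 1:1 + p]          # Cov(dst_t, src past)
    c_dd = C[0, 1 + p:]           # Cov(dst_t, dst past)
    Cxd = C[1:1 + p, 1 + p:]      # Cov(src past, dst past)
    Cdd = C[1 + p:, 1 + p:]
    M = c_dx - c_dd @ np.linalg.solve(Cdd, Cxd.T)  # partial out dst's past
    return np.linalg.svd(np.atleast_2d(M), compute_uv=False)

print("x -> y strength:", causal_strength(x, y))  # clearly nonzero
print("y -> x strength:", causal_strength(y, x))  # near zero
```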