Electrical and Electronic Engineering - Theses

Now showing 1 - 10 of 20
  • Item
    A Bayesian signal processing framework for dual polarized weather radar
    Samarasekera, Senaka ( 2015)
    Current weather radar algorithms for rain micro-physical parameter estimation do not make optimal use of the micro-physical models that govern the rain drop states. This leads to increased uncertainty at higher radar resolutions, and the underlying assumptions of these models can be inconsistent. In this thesis, I design and implement a non-linear filter that estimates the static micro-physical parameters of rain using dual polarised radar returns. I model the radar returns as electromagnetic backscattering from a random anisotropic medium, and introduce a likelihood function that enables joint estimation of the rain micro-physical parameters. I then examine the identifiability constraints of this model and well-condition it within the Bayesian framework. The filter takes the form of a Rao-Blackwellized sequential Monte Carlo sampler. Filter convergence is achieved with the use of independent Metropolis-Hastings move kernels and progressive correction. Application of the filter to rain-storm data suggests it can give higher-resolution estimates with increased precision.
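    The abstract names the filter's ingredients (a Rao-Blackwellized sequential Monte Carlo sampler, independent Metropolis-Hastings move kernels, progressive correction) without giving details, so the Python sketch below only illustrates that structure for a static-parameter posterior; the Gaussian placeholder likelihood and all tuning constants are assumptions, not the thesis's rain micro-physical model.

```python
# Sketch of a static-parameter SMC sampler with progressive correction and an
# independent Metropolis-Hastings move; likelihood and constants are placeholders.
import numpy as np

rng = np.random.default_rng(0)

def log_lik(theta, z):
    # Placeholder for the radar-return likelihood of micro-physical state theta.
    return -0.5 * np.sum((z - theta) ** 2, axis=-1)

def smc_static(z, n=500, temps=np.linspace(0.1, 1.0, 10)):
    theta = rng.normal(0.0, 5.0, size=(n, z.size))   # draws from a wide prior
    prev_t = 0.0
    for t in temps:                                  # progressive correction
        log_w = (t - prev_t) * log_lik(theta, z)     # incremental weights
        w = np.exp(log_w - log_w.max()); w /= w.sum()
        theta = theta[rng.choice(n, size=n, p=w)]    # multinomial resampling
        # Independent MH move from a fitted Gaussian restores particle
        # diversity; the prior ratio is omitted (near-flat over this support).
        mu, sd = theta.mean(0), theta.std(0) + 1e-6
        prop = rng.normal(mu, sd, size=theta.shape)
        log_q = lambda x: -0.5 * np.sum(((x - mu) / sd) ** 2, axis=-1)
        log_acc = (t * (log_lik(prop, z) - log_lik(theta, z))
                   + log_q(theta) - log_q(prop))
        keep = np.log(rng.uniform(size=n)) < log_acc
        theta[keep] = prop[keep]
        prev_t = t
    return theta  # approximate posterior samples of the static parameters

print(smc_static(np.array([1.2, -0.4, 0.7])).mean(axis=0))
```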
  • Item
    Modeling fetal cardiac valve intervals and fetal-maternal interactions
    Marzbanrad, Faezeh ( 2015)
    Despite advances in fetal healthcare, around 9-10 out of every 1000 babies in Australia die in the perinatal period, defined as starting from 22 weeks of pregnancy and extending to the first week after birth. This mortality rate is three to four times higher in some developing countries. Furthermore, false alarms produced by current fetal surveillance technology impose unnecessary interventions, which involve additional costs and potential maternal and fetal risks. There is therefore a critical need for more accurate fetal assessment methods that reliably identify fetal risks. Fetal heart assessment is one of the main concerns in fetal healthcare and provides significant information about fetal development and well-being. The aim of this research is to develop automated and accurate fetal heart assessment methods using noninvasive and less specialized techniques. In this research, automated methods were developed for estimating the fetal cardiac valve intervals, which are a fundamental and clinically significant part of fetal heart physiology. For this purpose, simultaneous recordings of the one-dimensional Doppler Ultrasound (1-D DUS) signal and noninvasive fetal electrocardiography (fECG) were used. New methods were developed for decomposing the DUS signal into the components manifesting the valves' motion. Opening and closing of the valves were then identified automatically based on the features of the DUS components, their temporal order, and their duration from the R-peak of the fECG. Evaluating the cardiac intervals across healthy gestational ages and in heart-anomaly cases showed evidence of their effectiveness in assessing fetal development and well-being. Fetal heart activity is influenced not only by fetal conditions and maturation, but also by the maternal psychological and physiological state. This research therefore also focused on the relationship between maternal and fetal heart rates. To this end, a model-free method based on Transfer Entropy (TE) was used to quantify directed interactions between maternal and fetal heart rates at various time delays and gestational ages. The changes in coupling throughout gestation provided detailed information on the fetal-maternal relationship, which can yield novel clinical markers of healthy versus pathological fetal development.
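    Transfer entropy is a standard information-theoretic quantity, so a minimal histogram-based estimator can illustrate the directed maternal-to-fetal coupling analysis described above. The bin count, lag, and synthetic coupled series below are illustrative assumptions; the thesis's actual estimator settings are not given in the abstract.

```python
# Histogram-based transfer entropy TE(source -> target), expanded into joint
# entropies: TE = H(yt,yp) - H(yp) - H(yt,yp,xp) + H(yp,xp).
import numpy as np

def transfer_entropy(source, target, lag=1, bins=8):
    x = np.digitize(source, np.histogram_bin_edges(source, bins))
    y = np.digitize(target, np.histogram_bin_edges(target, bins))
    yt, yp, xp = y[lag:], y[:-lag], x[:-lag]  # target now/past, source past
    def H(*cols):
        _, counts = np.unique(np.column_stack(cols), axis=0, return_counts=True)
        p = counts / counts.sum()
        return -np.sum(p * np.log2(p))
    return H(yt, yp) - H(yp) - H(yt, yp, xp) + H(yp, xp)

rng = np.random.default_rng(1)
maternal = rng.normal(size=2000)
fetal = 0.4 * np.roll(maternal, 1) + rng.normal(size=2000)  # coupled, lag 1
print(transfer_entropy(maternal, fetal, lag=1))   # positive: coupling direction
print(transfer_entropy(fetal, maternal, lag=1))   # near zero: reverse direction
```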
  • Item
    Energy efficient wireless system design
    Kudavithana, Dinuka ( 2015)
    The demand for telecommunication networks is increasing rapidly, and wireless access is a major contributor to this trend. On the other hand, wireless is considered one of the least energy-efficient transmission media, mainly due to its unguided nature. Efforts to increase wireless system energy efficiency generally focus on reducing the transmit power. However, this strategy may not save energy in short-distance communication systems, where the processing energy in hardware becomes significant compared to the transmit radio energy. This thesis examines the energy consumption of wireless systems by modeling it as a function of several parameters, such as receiver SNR, RF bandwidth, information rate, modulation scheme and code rate. We propose energy models for synchronization systems and other digital signal processing modules by considering the computational complexity of the algorithm and the required circuitry. Initially we focus on the synchronization aspects of wireless receivers. We study various algorithms for symbol timing recovery, carrier frequency recovery and carrier phase recovery, and compare their performance in order to identify suitable algorithms for different SNR regions. We then develop energy models for those synchronization sub-systems by analyzing the computational complexity of the circuitry in terms of the number of arithmetic, logic and memory operations. We define a new metric, the energy consumed to achieve a given performance as a function of SNR, in order to compare the energy efficiency of different estimation algorithms. Next, we investigate the energy-efficiency trade-offs of a point-to-point wireless system by developing energy models of both the transmitter and receiver that include practical aspects such as error control coding, synchronization and channel equalization. In our system, a multipath Rayleigh-fading channel model and a low-density parity-check (LDPC) coding scheme are chosen. We then develop a closed-form approximation for the total energy consumption as a function of receiver SNR and use it to find a minimum-energy transmission configuration. The results reveal that low-SNR operation (i.e. low transmit power) is not always the most energy-efficient strategy, especially in short-distance communication. We present an optimal-SNR concept which can save a significant amount of energy, mainly in short-range transmission systems. We then focus on cooperative relay systems. We investigate the energy-efficiency trade-offs of single-relay networks by developing energy models for two relay strategies: amplify-and-forward (AF) and detect-and-forward (DF). We then optimize the location and power allocation of the relay to minimize the total energy consumption. The optimum location is found in two-dimensional space for constrained and unconstrained scenarios. We then optimize the total energy consumption over the spectral efficiency and derive expressions for the optimal spectral-efficiency values. We use numerical simulations to verify our results. Finally, we focus on the energy efficiency of multi-relay systems by considering a dual-relay cooperative system using the DF protocol with full diversity. We propose a location-and-power-optimization approach for the relays to minimize the transmit radio energy. We then minimize the total system energy from a spectral-efficiency perspective for two scenarios: throughput-constrained and bandwidth-constrained configurations. Our proposed approach reduces the transmit energy consumption compared with a system in which the relays are equidistant and allocated equal power. Finally, we present an optimal transmission scheme as a function of distance by considering single-hop and multi-hop schemes. The overall results imply that more relays are required as the transmission distance increases in order to maintain high energy efficiency.
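    The optimal-SNR result is easy to illustrate numerically: if the total energy per bit is the sum of a transmit term that grows with SNR and a processing term that grows as the achievable rate falls, the minimum sits at an interior SNR. The cost coefficients below are made-up placeholders, not the thesis's hardware-derived closed-form models.

```python
# Toy energy-per-bit curve with an interior optimal SNR; constants assumed.
import numpy as np

snr_db = np.linspace(-2.0, 20.0, 221)
snr = 10.0 ** (snr_db / 10.0)

d = 30.0                              # assumed link distance in metres
e_tx = 1e-9 * snr * d**2              # transmit energy grows with SNR and d^2
e_proc = 5e-7 / np.log2(1.0 + snr)    # per-bit processing energy falls as the
e_total = e_tx + e_proc               # achievable information rate rises

i = int(np.argmin(e_total))
print(f"optimal SNR ~ {snr_db[i]:.1f} dB, energy/bit ~ {e_total[i]:.2e} J")
# At short distances the processing term dominates, so the minimum-energy SNR
# sits well above the lowest workable SNR; at large d the transmit term wins
# and the optimum moves back toward low SNR.
```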
  • Item
    Fundamental energy requirements of information processing and transmission
    Angley, Daniel Michael ( 2015)
    This thesis investigates fundamental limits on the energy required to process and transmit information. By combining physical laws, such as the second law of thermodynamics, with information theory, we present novel limits on the efficiency of systems that track objects, perform stochastic control, switch signals in communication systems and communicate information. This approach yields results that apply regardless of how the system is constructed. While the energy required to perform an ideal measurement of a static state has no known lower bound, this thesis demonstrates that this is not true for noisy measurements or if the state is evolving stochastically. We derive new lower bounds on the energy required to perform such tracking tasks, including Kalman filtering. The goal of stochastic control is usually to reduce the entropy of the controlled system. This is also the task of Maxwell's demon, a thought experiment in which a device or being reduces the thermodynamic entropy of a closed system, violating the second law of thermodynamics. We demonstrate that the same arguments that 'exorcise' Maxwell's demon can be used to find lower bounds on the energy consumption of stochastic controllers. We show that the configuration of a switching system in communications, which directs input signals to the desired outputs, can be used to store information. Reconfiguring the switch therefore erases information, and must have an energy cost of at least $k_B T \ln(2)$ per bit due to Landauer's principle. We then calculate lower bounds on the energy required to perform finite-time switching in a one-input, two-output MEMS (microelectromechanical system) mirror switch subject to Brownian motion, demonstrating that the shape of the potential that the switch is subject to affects both the steady-state noise and the energy required to change the configuration. Finally, by modifying Feynman's ratchet-and-pawl heat engine to perform communication instead of work, we investigate the efficiency of communication systems that operate solely on the temperature difference between two thermal reservoirs. The lower bound on the energy consumption of any communication system operating between two thermal reservoirs, with no channel noise and using equiprobable partitions of heat energy taken from these reservoirs, is found to be $\frac{T_H T_C}{T_H-T_C} k_B \ln(2)$, where $T_H$ and $T_C$ are the temperatures of the hot and cold reservoirs, and $k_B$ is Boltzmann's constant.
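    Both bounds quoted above can be checked numerically. The short script below evaluates Landauer's limit $k_B T \ln(2)$ and the two-reservoir bound $\frac{T_H T_C}{T_H-T_C} k_B \ln(2)$ at example temperatures; the temperatures are arbitrary choices, not values from the thesis.

```python
# Numerical check of the two bounds quoted above (temperatures are examples).
import math

k_B = 1.380649e-23  # Boltzmann's constant, J/K

def landauer(T):
    """Minimum energy to erase one bit at temperature T."""
    return k_B * T * math.log(2)

def reservoir_bound(T_hot, T_cold):
    """Per-bit bound for a communication system driven by two reservoirs."""
    return (T_hot * T_cold) / (T_hot - T_cold) * k_B * math.log(2)

print(f"Landauer at 300 K:            {landauer(300):.3e} J/bit")
print(f"Reservoir bound 350 K/300 K:  {reservoir_bound(350, 300):.3e} J/bit")
# The reservoir bound diverges as T_hot -> T_cold: with a vanishing temperature
# difference, extracting the work needed to signal one bit costs ever more heat.
```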
  • Item
    Energy consumption of cloud computing and fog computing applications
    Jalali, Fatemeh ( 2015)
    A great deal of attention has been paid to the energy consumption of Cloud services and data centers in an endeavor to reduce the energy consumption and carbon footprint of the ICT industry. Since the data in Cloud services is processed and stored in data centers, an obvious focus for studying the energy consumption of Cloud services is the data centers. However, the energy consumption of a Cloud service is not due to data centers alone; it also includes the energy consumed by the transport network that connects end-users to the Cloud and by the end-user devices accessing the Cloud. In most previous studies on the energy consumption of Cloud computing services, the energy consumed in the transport network and end-user devices has not been taken into account. To show the importance of these overlooked components, the total energy consumed by three well-known Cloud applications, Facebook, Google Drive and Microsoft OneDrive, is studied using measurements and modeling. The results show that achieving an energy-efficient Cloud service requires improving the energy efficiency of the transport network and the end-user devices along with the related data centers. The popularity of hosting and distributing content and applications from small servers located on end-user premises (known as nano data centers) is increasing, especially with the advent of the Internet of Things (IoT) and the Fog Computing paradigm. In this work we study the energy consumption of nano data centers, since there are differing views on it; these differences stem from the use of different energy-consumption models and from ignoring the energy consumed in the transport network. To fill the knowledge gap in this field, we propose measurement-based models of network topology and energy consumption to identify the parameters that make nano data centers more or less energy-efficient than centralized data centers. A number of findings emerge from this study, including the factors that enable nano data centers to consume less energy than their centralized counterparts, such as (a) the type of access network attached to the nano servers, (b) the ratio of a nano server's idle time to its active time, and (c) the type of application, including the number of downloads, updates and the amount of data pre-loading. This study shows that nano data centers can complement centralized data centers and lead to energy savings for applications that are off-loadable from centralized data centers to nano data centers.
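    A toy model makes the nano-versus-centralized comparison concrete: energy per served item splits into a transport term and an amortized server term, and the factors (a)-(c) above appear directly as parameters. Every coefficient below is an illustrative assumption; the thesis fits such parameters from measurements.

```python
# Toy nano-vs-centralized energy comparison; all coefficients are assumptions.
def energy_per_item(bits, transport_j_per_bit, server_power_w, items_per_hour):
    transport = bits * transport_j_per_bit             # network energy per item
    server = server_power_w * 3600.0 / items_per_hour  # amortized server energy
    return transport + server

item_bits = 8e6  # a 1 MB item

# Nano data center: one access-network hop, but a mostly idle always-on server
# serving few requests (factor (b): a poor idle-to-active ratio inflates the
# effective server power per served item).
nano = energy_per_item(item_bits, transport_j_per_bit=2e-7,
                       server_power_w=5.0, items_per_hour=20)

# Centralized data center: longer transport path (access + metro + core),
# but server power amortized over a very large request rate.
cloud = energy_per_item(item_bits, transport_j_per_bit=8e-7,
                        server_power_w=200.0, items_per_hour=50000)

print(f"nano: {nano:.1f} J/item, cloud: {cloud:.1f} J/item")
# Changing the access-network coefficient (a), the idle share folded into
# server_power_w (b), or the request mix (c) can flip which side wins.
```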
  • Item
    Crowd behavior analysis using video analytics
    Rao, Aravinda Sridhara ( 2015)
    Crowd analysis is a critical problem in understanding crowd behavior for surveillance applications. The current practice is manual scanning of video feeds from several sources. Video analytics allows the automatic detection of events of interest, but it faces many challenges because of non-rigid crowd motions and occlusions; algorithms developed for rigid objects are ineffectual for crowds. This study describes optical-flow-based video analytics for crowd analysis, with applications including people counting, density estimation, event detection, and abnormal event detection. There are two main approaches to detecting objects in a video. In the first, background modeling, modeled pixel values represent the scene and each pixel value determines whether it belongs to the background or the foreground. The second approach provides motion information by estimating an object's motion. Articulated actions and sudden movements of people limit background modeling; therefore, this thesis uses motion estimation to detect objects. Crowd density estimation is important for understanding crowd behavior. Optical flow features provide motion information about objects, and refining these features with spatial filters produces motion cues that signal the presence of people. Clustering the motion cues hierarchically, using single-linkage clustering, yields the crowd density estimate. The approach presented in this thesis processes frames block by block and produces excellent results on a frame-by-frame basis, which is new compared with existing approaches. Crowd events such as walking, running, merging, separating into groups ('splitting'), dispersing, and evacuating are critical to understanding crowd behavior. However, video data lie in a high-dimensional space, whereas events lie in a low-dimensional space. This thesis introduces a novel Optical Flow Manifolds (OFM) scheme to detect crowd events. Experimental results suggest that the proposed semi-supervised approach performs best in detecting merging, separating into groups ('splitting'), and dispersion events compared with existing methods. The advantages of the semi-supervised approach are that it requires a single parameter to detect crowd events and that results are provided on a frame-by-frame basis. Crowd event detection requires information on the number of neighboring and incoming frames, which is difficult to estimate in advance; it therefore needs adaptive schemes that can detect events automatically. This study presents a new adaptive crowd event detection approach using the OFM framework. To the best of our knowledge, this is the first study to report adaptive crowd event detection. Experimental results suggest that the proposed approach accurately detects crowd events and, given the computational time it needs to detect events, is suitable for near real-time video surveillance systems. Anomalous events in crowded videos require spatio-temporal localization of crowd events; appropriate features and suitable coding of those features result in accurate event localization. In this study, the proposed spatial and spatio-temporal coded features detect anomalous events. To the best of our knowledge, this is the first study to report the detection of loitering people in a video. The approach helps manage crowds at, for example, stadiums, public transport hubs, pedestrian crossings, and other public places.
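    The density-estimation pipeline described above (dense optical flow, spatial filtering into motion cues, single-linkage hierarchical clustering) can be sketched with standard OpenCV and SciPy calls. The thresholds, kernel size, and subsampling factor below are illustrative guesses, not the thesis's tuned values.

```python
# Sketch of the crowd-density pipeline: dense optical flow, spatially filtered
# flow magnitude as motion cues, then single-linkage clustering of cue pixels.
import cv2
import numpy as np
from scipy.cluster.hierarchy import fcluster, linkage

def crowd_clusters(prev_gray, curr_gray, mag_thresh=1.0, cut_dist=20.0):
    flow = cv2.calcOpticalFlowFarneback(prev_gray, curr_gray, None,
                                        0.5, 3, 15, 3, 5, 1.2, 0)
    mag = np.linalg.norm(flow, axis=2)           # per-pixel motion magnitude
    mag = cv2.GaussianBlur(mag, (9, 9), 0)       # spatial filter -> motion cues
    ys, xs = np.nonzero(mag > mag_thresh)        # pixels with significant motion
    pts = np.column_stack([xs, ys]).astype(float)[::50]  # subsample for speed
    if len(pts) < 2:
        return 0
    Z = linkage(pts, method="single")            # single-linkage hierarchy
    labels = fcluster(Z, t=cut_dist, criterion="distance")
    return int(labels.max())                     # cluster count ~ crowd groups

# Usage: call with consecutive grayscale frames from a surveillance feed;
# the cluster count per frame gives a frame-by-frame density proxy.
```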
  • Item
    Three-dimensional intensity reconstruction in single-particle experiments: a spherical symmetry approach
    Flamant, Julien ( 2015)
    The ability to decipher the three-dimensional structures of biomolecules at high resolution will greatly improve our understanding of the biological machinery. To this end, X-ray crystallography has been used by scientists for several decades with tremendous results. This imaging method, however, requires a crystal to be grown, and for most interesting biomolecules (proteins, viruses) this may not be possible. The single-particle experiment was proposed to address these limitations, and the recent advent of ultra-bright X-ray Free Electron Lasers (XFELs) opens a new set of opportunities in biomolecular imaging. In the single-particle experiment, thousands of diffraction patterns are recorded, where each image corresponds to an unknown, random orientation of an individual copy of the biomolecule. These noisy, unoriented two-dimensional diffraction patterns must then be assembled in three-dimensional space to form the three-dimensional intensity function, which completely characterizes the three-dimensional structure of the biomolecule. This work focuses on geometrical variations of an existing algorithm, the Expansion-Maximization-Compression (EMC) algorithm introduced by Loh and Elser. The algorithm relies upon an expectation-maximization method, maximizing the likelihood of an intensity model with respect to the diffraction patterns. The contributions of this work are (i) the redefinition of the EMC algorithm in a spherical design, motivated by the intrinsic properties of the intensity function, (ii) the use of an orthonormal harmonic basis on the three-dimensional ball, which allows a sparse representation of the intensity function, (iii) the scaling of the EMC parameters with the desired resolution, increasing computational speed, and (iv) an analysis of the intensity error with respect to the EMC parameters.
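    Contribution (ii) refers to an orthonormal harmonic basis on the ball; the abstract does not specify the radial functions, so the expansion below is only an indicative form of such a representation, with $Y_{lm}$ the spherical harmonics and $R_{nl}$ an assumed orthonormal radial basis.

```latex
% Indicative ball-harmonic expansion of the intensity W (radial basis assumed):
W(r,\theta,\phi) \;=\; \sum_{n,l,m} a_{nlm}\, R_{nl}(r)\, Y_{lm}(\theta,\phi),
\qquad
a_{nlm} \;=\; \int_{0}^{R}\!\!\int_{S^2} W(r,\theta,\phi)\,
              R_{nl}(r)\, Y_{lm}^{*}(\theta,\phi)\, \mathrm{d}\Omega\, r^{2}\,\mathrm{d}r .
```

    Sparsity then amounts to most coefficients $a_{nlm}$ being negligible, so the EM iterations can update a truncated coefficient vector rather than a dense three-dimensional grid.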
  • Item
    Analysis of beat-to-beat ventricular repolarization duration variability from electrocardiogram signal
    Imam, Mohammad ( 2015)
    Electrocardiogram (ECG) signal analysis is a ubiquitous tool for investigating the heart's function. The ECG reflects the propagation of the cardiac action potential through the heart chambers (from the atria to the ventricles), and any graphically detectable irregularity in the ECG represents an abnormality in the polarization process (i.e. depolarization and repolarization) of the cardiac muscle cells. The lower chambers of the heart, termed the ventricles, perform the main pumping function by directing blood to the lungs and the peripheral system, including the brain and all other body parts. Abnormal ventricular function is critical and can cause fatal cardiac disease, in which the heart loses its ability to maintain proper circulation. The depolarization and repolarization processes of the cardiac action potential drive the contraction and relaxation of the heart, and their durations can be measured from the temporal distances between the different ECG waves (i.e. QRS duration, RR interval, QT interval). Abnormalities in these temporal durations, quantified by time-series variability measures, indicate problems in the normal cardiac muscle polarization process. The ventricular repolarization (VR) duration contains both the depolarization and repolarization durations, though the duration of depolarization is quite small compared with that of repolarization. Prolongation of the VR duration beyond a normal baseline is a sign of ventricular dysfunction, which might initiate fatal ventricular arrhythmias (ventricular tachycardia and ventricular fibrillation). VR duration variability, represented by QT interval time-series variability (QTV) in the ECG, contains crucial information about the dynamics of the VR process, which characterises the function of the ventricles. QTV is inherently affected by heart rate, respiration, the autonomic nervous system (ANS), age, gender and various genetic disorders of the cardiac ion channels. Variation in the VR duration may therefore be affected by several factors, which cannot be analysed properly using gross time-series variability measures (i.e. mean, standard deviation). This thesis investigates techniques for analysing QTV from QT interval time series extracted from the ECG, examining how different physiological and pathological conditions affect the normal VR process and how these alterations can be used for subclinical predictive analysis of cardiac disease. Model-based QTV analysis techniques were investigated, and a respiratory-information-based modelling approach is proposed for analysing dynamic QTV in healthy ageing and under stress. ECG-derived respiration (EDR) was found to be a valid surrogate for respiration in modelling QTV, providing an ECG-only modelling technique that removes the need to record a separate respiration signal. EDR-based modelling was found to be very effective in describing QTV changes accompanying denervation of the ANS branches (parasympathetic and sympathetic) in cardiac autonomic neuropathy (CAN), a prevalent complication in diabetic patients. These findings describe the effect of ANS modulation on QTV, which is important for validating QTV as a non-invasive measure of sympathetic nervous system modulation of the ventricles. A novel approach describing the interaction of systolic and diastolic time intervals, derived from the VR duration (i.e. QT interval) and the cardiac cycle duration (i.e. RR interval) in the ECG, was found to be very effective in detecting subclinical CAN and its progression. This finding demonstrates the feasibility of ECG-based VR duration measures for analysing left ventricular function. A novel beat-to-beat QT-RR interaction analysis technique was also developed and found very useful for analysing age-related alterations in the normal VR process. The proposed measure can also be used to determine the QTV component that is not directly driven by the RR intervals (i.e. the QTV component independent of heart rate variability), which is more sensitive to sympathetic modulation of the ventricles. Moreover, this technique showed promising results in the analysis of dynamic QTV changes before arrhythmogenesis, which can be used for predictive analysis of ventricular arrhythmias. Finally, the QTV analysis techniques proposed in this thesis will help in designing low-cost, effective ECG-based ambulatory care systems for subclinical cardiovascular disease detection.
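    A minimal sketch of beat-to-beat QTV analysis of the kind discussed above: build a QT series from annotated Q-onsets and T-ends, then separate an RR-driven component from an RR-independent residual by linear regression. The annotation arrays, the linear QT-RR model, and the synthetic data are simplifying assumptions, not the thesis's methods.

```python
# Decompose beat-to-beat QT variability into an RR-driven part and an
# RR-independent residual (a candidate marker of sympathetic modulation).
import numpy as np

def qtv_decomposition(q_onsets, t_ends, r_peaks):
    qt = t_ends - q_onsets                # beat-to-beat QT intervals (s)
    rr = np.diff(r_peaks)                 # preceding RR intervals (s)
    qt = qt[1:]                           # align QT_n with RR_{n-1}
    slope, intercept = np.polyfit(rr, qt, 1)
    residual = qt - (slope * rr + intercept)
    return {
        "QTV": np.std(qt),                       # gross QT variability
        "QTV_RR_driven": np.std(slope * rr),     # heart-rate-driven component
        "QTV_RR_independent": np.std(residual),  # HRV-independent component
    }

rng = np.random.default_rng(2)
r = np.cumsum(0.8 + 0.05 * rng.standard_normal(300))   # synthetic R-peak times
q = r - 0.02                                           # Q onset ~20 ms before R
t = (q + 0.35 + 0.1 * np.diff(r, prepend=r[0] - 0.8)   # QT tracks RR plus noise
     + 0.01 * rng.standard_normal(300))
print(qtv_decomposition(q, t, r))
```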
  • Item
    Planar nanoelectronic devices and biosensors using two-dimensional nanomaterials
    Al-Dirini, Feras Mohamad ( 2015)
    Graphene, a monolayer of carbon atoms and the first two-dimensional (2D) material to be isolated, has sparked great excitement and vast opportunities in the global research community. Its isolation led to the discovery of a new family of materials that are completely 2D, each of which exhibits unique properties in its own right. Such a wide range of new nanomaterials in a completely unexplored 2D platform offers a potential treasure for the electronics industry that is yet to be explored. However, after more than a decade of research, nanoelectronic devices based on 2D nanomaterials have not yet met the high expectations set for them by the electronics industry. This thesis aims to drive these efforts forward by proposing a different approach to the conceptualization of nanoelectronic devices, in light of the new opportunities offered by 2D nanomaterials. The proposed approach centres on exploiting the truly unique property of two-dimensionality, which defines and distinguishes this exciting family of 2D nanomaterials, to realize completely 2D planar nanoelectronic devices. Less reliance is placed on properties unique to individual 2D nanomaterials; however, wherever possible, such properties are exploited to enhance the performance of the proposed devices. The proposed approach is applied to the conceptualization of a number of planar nanoelectronic devices with potential in a range of immediate as well as long-term envisioned applications, complementing conventional electronics in the short term but also having the potential to revolutionize electronics in the long term. All of the proposed devices are planar, completely 2D and realizable within a single 2D monolayer, reducing the required number of processing steps and enabling extreme miniaturization and CMOS compatibility. For the first time, a 2D Graphene Self-Switching Diode (G-SSD) is proposed and investigated, showing promising potential as a nanoscale rectifier. By exploiting some of graphene's unique properties, the G-SSD is transformed into different types of planar devices that can achieve rectification, Negative Differential Resistance (NDR) operation and tunable biosensing. The extension of the proposed approach to other types of 2D nanomaterials is also investigated by exploring the implementation of SSDs using MoS2 and silicene. Finally, new classes of graphene resonant tunneling diodes (RTDs) with completely 2D planar architectures are proposed, showing unique transport properties and promising performance while requiring minimal processing steps during fabrication.
  • Item
    Neural correlates of consciousness and communication in disorders of consciousness
    Liang, Xingwen ( 2015)
    It is difficult to distinguish disorders of consciousness from certain disorders of communication for vegetative and minimally conscious patients who suffer from impairment of awareness and cannot produce reliable behavioural output. This thesis reviews some previous neuroimaging studies on mental imagery and brain injured patients, and presents a functional magnetic resonance imaging (fMRI) study of five patients that seeks to extend communication with them through asking them to answer simple questions with ‘yes’, ‘no’ or ‘I don’t know’ answers by performing mental imagery tasks of ‘playing tennis’, ‘navigating the home’, ‘imagining familiar faces’, and ‘counting up from 10 by 7s’. Consideration is given to how each individual’s activation map deviates from the control group map and a quantitative method of overlap, the percent overlap metric A, to classify the deviations is proposed. Promising results were found on controls with this method to infer which imagery task had been done. The full results of three tests for each participant are reported: speech comprehension capacity, mental imagery, and question-answer. Specific brain activations were observed in the first two tests: the posterior parts of superior and middle temporal cortices for ‘sentences’ in the language test; the paraphippocampal area and premotor area for navigation, superior parietal cortex and premotor area for tennis; lateral prefrontal (BA44,45), intra-parietal sulcus, and superior parietal areas for counting; frontal orbital cortex, left Broca’s area 44, and right Broca’s area 45 for faces in the imagery test. In the question-answer test, most of tennis or navigation tasks could be identified correctly when employed while answering as measured by the metric A. Although some patients produced activations in similar areas to controls for certain tasks, only two minimally conscious patients showed significant activation changes as judged by the fMRI time series for some tasks. The activation maps observed for two patients with 1.5T MRI provide independent support to the work from other groups (at 3T) on finding patients with a disorder of consciousness who can perform mental imagery tasks, which suggests broader clinical utility for the tests presented here. Given the control participant results for the mental imagery and question-answers tasks, it should be possible to at least work with locked-in patients at 1.5T. An original contribution includes consideration of the task of mental calculation. The evidence for specific pattern for counting task is provided for a group of 11 healthy participants.