Electrical and Electronic Engineering - Theses


Now showing 1 - 10 of 16
  • Item
    A Bayesian signal processing framework for dual polarized weather radar
    Samarasekera, Senaka ( 2015)
    Current weather radar algorithms for rain micro-physical parameter estimation do not make optimal use of the micro-physical models that govern the rain drop states. This leads to increased uncertainty at higher radar resolutions, and the underlying assumptions of these models can be inconsistent. In this thesis, I design and implement a non-linear filter that estimates the static micro-physical parameters of rain using dual polarised radar returns. I model the radar returns as electromagnetic backscattering from a random anisotropic medium, and introduce a likelihood function that enables joint estimation of the rain micro-physical parameters. I then examine the identifiability constraints of this model and well-condition it within the Bayesian framework. The filter takes the form of a Rao-Blackwellized sequential Monte Carlo sampler. Filter convergence is achieved with the use of independent Metropolis-Hastings move kernels and progressive correction. Application of the filter to rain-storm data suggests it can give higher resolution estimates with increased precision.
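
The filtering machinery named in this abstract (a sequential Monte Carlo sampler with independent Metropolis-Hastings move kernels and progressive correction) can be illustrated with a toy example. The sketch below estimates a single static parameter from noisy scalar observations under a Gaussian likelihood, which is only a hypothetical stand-in for the dual-polarised radar model; the Rao-Blackwellized structure of the actual filter is not reproduced here.

```python
import numpy as np

rng = np.random.default_rng(0)

def log_lik(theta, y, sigma=1.0):
    # Toy likelihood: each observation in y is a noisy copy of the scalar theta.
    return -0.5 * np.sum((y[:, None] - theta[None, :]) ** 2, axis=0) / sigma ** 2

def smc_static(y, n_particles=500, n_steps=10, prior_sd=5.0):
    theta = rng.normal(0.0, prior_sd, n_particles)      # draws from the prior
    log_w = np.zeros(n_particles)
    phis = np.linspace(0.0, 1.0, n_steps + 1)           # progressive correction schedule
    for phi_prev, phi in zip(phis[:-1], phis[1:]):
        log_w += (phi - phi_prev) * log_lik(theta, y)   # temper in more of the likelihood
        w = np.exp(log_w - log_w.max()); w /= w.sum()
        theta = theta[rng.choice(n_particles, n_particles, p=w)]   # resample
        log_w = np.zeros(n_particles)
        # Independent Metropolis-Hastings move from a Gaussian fitted to the particles.
        mu, sd = theta.mean(), theta.std() + 1e-9
        prop = rng.normal(mu, sd, n_particles)
        def log_target(t):
            return phi * log_lik(t, y) - 0.5 * (t / prior_sd) ** 2   # tempered posterior
        def log_q(t):
            return -0.5 * ((t - mu) / sd) ** 2
        log_alpha = (log_target(prop) - log_q(prop)) - (log_target(theta) - log_q(theta))
        accept = np.log(rng.uniform(size=n_particles)) < log_alpha
        theta = np.where(accept, prop, theta)
    return theta

y = rng.normal(2.0, 1.0, size=50)          # synthetic observations, true parameter 2.0
print(smc_static(y).mean())                # posterior mean, close to the sample mean of y
```
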
  • Item
    Modeling fetal cardiac valve intervals and fetal-maternal interactions
    Marzbanrad, Faezeh ( 2015)
    Despite advances in fetal healthcare, around 9-10 out of every 1000 babies in Australia die in the perinatal period, which is defined as starting from 22 weeks of pregnancy and extending to the first week after birth. This mortality rate is three to four times higher in some developing countries. Furthermore, false alarms produced by current fetal surveillance technology lead to unnecessary interventions, which involve additional costs and potential maternal and fetal risks. There is therefore a critical need for more accurate fetal assessment methods that reliably identify fetal risks. Fetal heart assessment is one of the main concerns in fetal healthcare and provides significant information about fetal development and well-being. The aim of this research is to develop automated and accurate fetal heart assessment methods using noninvasive and less specialized techniques. In this research, automated methods were developed for estimating the fetal cardiac valve intervals, which are a fundamental and clinically significant part of fetal heart physiology. For this purpose, simultaneous recordings of the one-dimensional Doppler Ultrasound (1-D DUS) signal and noninvasive fetal electrocardiography (fECG) were used. New methods were developed for decomposing the DUS signal into the components manifesting the valves' motion. Opening and closing of the valves were then identified automatically based on the features of the DUS components, their temporal order, and their duration from the R-peak of the fECG. Evaluating the cardiac intervals across healthy gestational ages and in heart anomaly cases showed evidence of their effectiveness in assessing fetal development and well-being. Fetal heart activity is influenced not only by fetal conditions and maturation, but also by maternal psychological and physiological conditions. Therefore, this research also focused on the relationship between maternal and fetal heart rates. To this end, a model-free method based on Transfer Entropy (TE) was used to quantify directed interactions between maternal and fetal heart rates at various time delays and gestational ages. Changes in the coupling throughout gestation provided detailed information on the fetal-maternal relationship, which can provide novel clinical markers of healthy versus pathological fetal development.
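
The directed fetal-maternal coupling analysis mentioned above relies on transfer entropy. A minimal binned estimator is sketched below for a single one-beat delay; the bin count, the synthetic heart-rate series and the variable names are illustrative assumptions, and the thesis' model-free estimator over multiple delays and gestational ages is more elaborate.

```python
import numpy as np
from collections import Counter

def transfer_entropy(x, y, n_bins=4):
    # TE_{X->Y} at lag 1 with quantile-binned series: how much x_t helps
    # predict y_{t+1} beyond what y_t already provides.
    xq = np.digitize(x, np.quantile(x, np.linspace(0, 1, n_bins + 1)[1:-1]))
    yq = np.digitize(y, np.quantile(y, np.linspace(0, 1, n_bins + 1)[1:-1]))
    triples = Counter(zip(yq[1:], yq[:-1], xq[:-1]))    # (y_{t+1}, y_t, x_t)
    pairs_yx = Counter(zip(yq[:-1], xq[:-1]))           # (y_t, x_t)
    pairs_yy = Counter(zip(yq[1:], yq[:-1]))            # (y_{t+1}, y_t)
    singles_y = Counter(yq[:-1])                        # y_t
    n = len(yq) - 1
    te = 0.0
    for (y1, y0, x0), c in triples.items():
        p_joint = c / n
        p_cond_full = c / pairs_yx[(y0, x0)]            # p(y_{t+1} | y_t, x_t)
        p_cond_self = pairs_yy[(y1, y0)] / singles_y[y0]  # p(y_{t+1} | y_t)
        te += p_joint * np.log2(p_cond_full / p_cond_self)
    return te                                           # bits per beat

rng = np.random.default_rng(1)
maternal = rng.normal(size=2000)
fetal = 0.4 * np.roll(maternal, 1) + rng.normal(size=2000)  # fetal depends on lagged maternal
print(transfer_entropy(maternal, fetal))   # noticeably larger in this direction
print(transfer_entropy(fetal, maternal))   # close to zero (estimator bias aside)
```
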
  • Item
    Energy efficient wireless system design
    Kudavithana, Dinuka ( 2015)
    The demand for telecommunication networks is increasing rapidly, and wireless access is a major contributor to this trend. At the same time, wireless is among the least energy-efficient transmission media, mainly due to its unguided nature. Efforts to increase wireless system energy efficiency generally focus on reducing the transmit power. However, this strategy may not save energy in short-distance communication systems, where the processing energy in hardware becomes significant compared to the transmit radio energy. This thesis models the energy consumption of wireless systems as a function of several parameters such as receiver SNR, RF bandwidth, information rate, modulation scheme and code rate. We propose energy models for synchronization systems and other digital signal processing modules by considering the computational complexity of the algorithms and the required circuitry. Initially we focus on the synchronization aspects of wireless receivers. We study various algorithms for symbol timing recovery, carrier frequency recovery and carrier phase recovery, and compare their performance in order to identify suitable algorithms for different SNR regions. We then develop energy models for those synchronization sub-systems by analyzing the computational complexity of the circuitry in terms of the number of arithmetic, logic and memory operations. We define a new metric, the energy consumption required to achieve a given performance as a function of SNR, in order to compare the energy efficiency of different estimation algorithms. Next, we investigate the energy-efficiency trade-offs of a point-to-point wireless system by developing energy models of both the transmitter and receiver that include practical aspects such as error control coding, synchronization and channel equalization. In our system, a multipath Rayleigh-fading channel model and a low-density parity check (LDPC) coding scheme are chosen. We then develop a closed-form approximation for the total energy consumption as a function of receiver SNR and use it to find a minimum-energy transmission configuration. The results reveal that low-SNR operation (i.e. low transmit power) is not always the most energy-efficient strategy, especially in short-distance communication. We present an optimal-SNR concept which can save a significant amount of energy, mainly in short-range transmission systems. We then focus on cooperative relay systems. We investigate the energy-efficiency trade-offs of single-relay networks by developing energy models for two relay strategies: amplify-and-forward (AF) and detect-and-forward (DF). We then optimize the location and power allocation of the relay to minimize the total energy consumption. The optimum location is found in two-dimensional space for constrained and unconstrained scenarios. We then optimize the total energy consumption over the spectral efficiency and derive expressions for the optimal spectral efficiency values. We use numerical simulations to verify our results. Finally, we focus on the energy efficiency of multi-relay systems by considering a dual-relay cooperative system using the DF protocol with full diversity. We propose a location-and-power-optimization approach for the relays to minimize the transmit radio energy. We then minimize the total system energy from a spectral efficiency perspective for two scenarios: throughput-constrained and bandwidth-constrained configurations. Our proposed approach reduces the transmit energy consumption compared to a relay system with equal power allocation and equidistant relays. Finally, we present an optimal transmission scheme as a function of distance by considering single-hop and multi-hop schemes. The overall results imply that more relays are required as the transmission distance increases in order to maintain high energy efficiency.
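
The optimal-SNR idea described in this abstract can be illustrated with a toy energy model: energy per bit is the sum of transmit radio energy, which grows with SNR and distance, and circuit/processing power amortised over a Shannon-style bit rate that also grows with SNR. All constants in the sketch below are illustrative placeholders rather than the thesis' calibrated models.

```python
import numpy as np

def energy_per_bit(snr_db, distance_m, bandwidth_hz=1e6,
                   noise_psd_w_per_hz=4e-21, path_loss_exp=3.5,
                   amp_inefficiency=2.0, circuit_power_w=0.1):
    snr = 10 ** (snr_db / 10)
    bit_rate = bandwidth_hz * np.log2(1 + snr)     # Shannon-style achievable rate
    # Transmit power needed to reach this receiver SNR over this distance.
    p_tx = (amp_inefficiency * snr * noise_psd_w_per_hz * bandwidth_hz
            * distance_m ** path_loss_exp)
    return (p_tx + circuit_power_w) / bit_rate     # joules per bit

snr_grid = np.linspace(1, 30, 291)
for d in (10, 1000, 3000):
    e = [energy_per_bit(s, d) for s in snr_grid]
    best = snr_grid[int(np.argmin(e))]
    print(f"distance {d:>4} m: minimum-energy receiver SNR ~ {best:.1f} dB")
# Short links favour a high SNR (circuit power dominates), while longer links
# favour a lower SNR as transmit energy takes over.
```
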
  • Item
    Fundamental energy requirements of information processing and transmission
    Angley, Daniel Michael ( 2015)
    This thesis investigates fundamental limits on the energy required to process and transmit information. By combining physical laws, such as the second law of thermodynamics, with information theory, we present novel limits on the efficiency of systems that track objects, perform stochastic control, switch communication systems and communicate information. This approach yields results that apply regardless of how the system is constructed. While the energy required to perform an ideal measurement of a static state has no known lower bound, this thesis demonstrates that this is not true for noisy measurements or for a state that evolves stochastically. We derive new lower bounds on the energy required to perform such tracking tasks, including Kalman filtering. The goal of stochastic control is usually to reduce the entropy of the controlled system. This is also the task of a Maxwell demon, a thought experiment in which a device or being reduces the thermodynamic entropy of a closed system, violating the second law of thermodynamics. We demonstrate that the same arguments that `exorcise' Maxwell's demon can be used to find lower bounds on the energy consumption of stochastic controllers. We show that the configuration of a switching system in communications, which directs input signals to the desired outputs, can be used to store information. Reconfiguring the switch therefore erases information, and must have an energy cost of at least $k_B T \ln(2)$ per bit due to Landauer's principle. We then calculate lower bounds on the energy required to perform finite-time switching in a one-input, two-output MEMS (microelectromechanical system) mirror switch subject to Brownian motion, demonstrating that the shape of the potential to which the switch is subject affects both the steady-state noise and the energy required to change the configuration. Finally, by modifying Feynman's ratchet-and-pawl heat engine to perform communication instead of work, we investigate the efficiency of communication systems that operate solely on the temperature difference between two thermal reservoirs. The lower bound on the energy consumption of any communication system operating between two thermal reservoirs, with no channel noise and using equiprobable partitions of heat energy taken from these reservoirs, is found to be $\frac{T_H T_C}{T_H-T_C} k_B \ln(2)$, where $T_H$ and $T_C$ are the temperatures of the hot and cold reservoirs, and $k_B$ is Boltzmann's constant.
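
For a sense of scale, the two bounds quoted in this abstract can be evaluated numerically; the temperatures used below are arbitrary example values, not figures from the thesis.

```python
from math import log

k_B = 1.380649e-23            # Boltzmann's constant, J/K

def landauer_bound(T):
    # Minimum energy to erase one bit at temperature T (joules).
    return k_B * T * log(2)

def two_reservoir_bound(T_hot, T_cold):
    # Minimum energy per bit for a noiseless communication system driven only
    # by the temperature difference between two reservoirs (joules).
    return k_B * log(2) * (T_hot * T_cold) / (T_hot - T_cold)

print(landauer_bound(300))            # ~2.87e-21 J per bit at room temperature
print(two_reservoir_bound(350, 300))  # ~2.0e-20 J per bit for a 50 K difference
```
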
  • Item
    Energy consumption of cloud computing and fog computing applications
    Jalali, Fatemeh ( 2015)
    A great deal of attention has been paid to the energy consumption of Cloud services and data centers in an endeavor to reduce the energy consumption and carbon footprint of the ICT industry. Since the data in Cloud services is processed and stored in data centers, an obvious focus for studying the energy consumption of Cloud services is the data centers. However, the energy consumption of a Cloud service is not due to data centers alone; it also includes the energy consumed by the transport network that connects end-users to the Cloud and by end-user devices when accessing the Cloud. Most previous studies on the energy consumption of Cloud computing services have not taken the energy consumed in the transport network and end-user devices into account. To show the importance of these overlooked parts, the total energy consumed by three well-known Cloud applications, Facebook, Google Drive and Microsoft OneDrive, is studied using measurements and modeling. The results show that achieving an energy-efficient Cloud service requires improving the energy efficiency of the transport network and the end-user devices along with the related data centers. The popularity of hosting and distributing content and applications from small servers located on end-user premises (known as nano data centers) is increasing, especially with the advent of the Internet of Things (IoT) and the Fog computing paradigm. In this work we study the energy consumption of nano data centers, on which existing studies take differing views. These differences stem from the use of different energy consumption models and from ignoring the energy consumed in the transport network. To fill this knowledge gap, we propose models for network topology and energy consumption, established from measurements, to identify the parameters that make nano data centers more or less energy-efficient than centralized data centers. A number of findings emerge from this study, including the factors that enable nano data centers to consume less energy than their centralized counterparts: (a) the type of access network attached to the nano servers, (b) the ratio of the nano server's idle time to active time, and (c) the type of application, including the number of downloads, updates and the amount of data pre-loading. This study shows that nano data centers can complement centralized data centers and lead to energy savings for applications that are off-loadable from centralized data centers to nano data centers.
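
The comparison made in this abstract between nano and centralized data centres can be sketched as a back-of-the-envelope energy-per-job model in which transport hops and the nano server's idle-time amortisation dominate. Every constant below is an illustrative placeholder, not a value measured in the thesis.

```python
def centralized_energy_per_job(bits, hops=6, transport_j_per_bit_per_hop=2e-7,
                               server_j_per_bit=5e-8):
    # Many network hops to a remote data centre, but efficient shared servers.
    return hops * transport_j_per_bit_per_hop * bits + server_j_per_bit * bits

def nano_energy_per_job(bits, jobs_per_day, hops=1,
                        transport_j_per_bit_per_hop=2e-7, idle_power_w=5.0):
    # Few hops to a home nano server, but its idle power is amortised over
    # however many jobs it actually serves in a day.
    idle_share = idle_power_w * 86400 / max(jobs_per_day, 1)
    return hops * transport_j_per_bit_per_hop * bits + idle_share

job_bits = 8 * 10e6                      # one 10 MB download
for jobs_per_day in (10, 1000, 100000):
    print(f"{jobs_per_day:>6} jobs/day: "
          f"centralized {centralized_energy_per_job(job_bits):8.1f} J, "
          f"nano {nano_energy_per_job(job_bits, jobs_per_day):10.1f} J")
# The nano data centre only wins when its idle time is well amortised,
# echoing factor (b) in the abstract.
```
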
  • Item
    Crowd behavior analysis using video analytics
    Rao, Aravinda Sridhara ( 2015)
    Crowd analysis is a critical problem in understanding crowd behavior for surveillance applications. The current practice is to manually scan video feeds from several sources. Video analytics allows the automatic detection of events of interest, but it faces many challenges because of non-rigid crowd motions and occlusions; algorithms developed for rigid objects are ineffectual for crowds. This thesis describes optical flow-based video analytics for crowd analysis, with applications including people counting, density estimation, event detection, and abnormal event detection. There are two main approaches to detecting objects in a video. The first, background modeling, models the scene background: modeled pixel values represent the scene, and each pixel is classified as belonging to the background or the foreground. The second approach provides motion information by estimating objects' motion. Articulated actions and sudden movements of people limit background modeling; therefore, this thesis uses motion estimation to detect objects. Crowd density estimation is important for understanding crowd behavior. Optical flow features provide motion information about objects, and refining these features using spatial filters produces motion cues that signal the presence of people. Clustering these motion cues hierarchically, using single-linkage clustering, yields an estimate of crowd density. The approach conducts block-by-block processing of frames and produces excellent results on a frame-by-frame basis, which is new compared with existing approaches. Crowd events such as walking, running, merging, separating into groups (``splitting''), dispersing, and evacuating are critical to understanding crowd behavior. However, video data lie in a high-dimensional space, whereas events lie in a low-dimensional space. This thesis introduces a novel Optical Flow Manifolds (OFM) scheme to detect crowd events. Experimental results suggest that the proposed semi-supervised approach performs best in detecting merging, separating into groups (``splitting''), and dispersing events compared with existing methods. The advantages of the semi-supervised approach are that it requires only a single parameter to detect crowd events and that results are provided on a frame-by-frame basis. Crowd event detection requires information on the number of neighboring and incoming frames, which is difficult to estimate in advance; it therefore needs adaptive schemes that can detect events automatically. This study presents a new adaptive crowd event detection approach using the OFM framework. To the best of our knowledge, this is the first study to report adaptive crowd event detection. Experimental results suggest that the proposed approach accurately detects crowd events and, given its computational time, is suitable for near real-time video surveillance systems. Anomalous events in crowded videos need spatio-temporal localization. Appropriate features and suitable coding of features result in accurate event localization; in this study, the proposed spatial and spatio-temporal coded features detect anomalous events. To the best of our knowledge, this is the first study to report the detection of loitering people in a video. The approach helps manage crowds, for example, at stadiums, public transport hubs, pedestrian crossings, and other public places.
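
The density-estimation pipeline summarised above (optical flow features, spatial filtering into motion cues, single-linkage hierarchical clustering) can be sketched as follows. The magnitude threshold, the linkage distance and the synthetic frames are illustrative assumptions; the thesis' block-by-block processing and spatial filters are more refined.

```python
import numpy as np
import cv2
from scipy.cluster.hierarchy import linkage, fcluster

def motion_clusters(prev_gray, next_gray, mag_thresh=1.0, join_dist=15.0):
    # Dense optical flow between consecutive frames.
    flow = cv2.calcOpticalFlowFarneback(prev_gray, next_gray, None,
                                        0.5, 3, 15, 3, 5, 1.2, 0)
    mag = np.linalg.norm(flow, axis=2)
    ys, xs = np.nonzero(mag > mag_thresh)       # motion cues after thresholding
    if len(xs) < 2:
        return 0
    pts = np.column_stack([xs, ys]).astype(float)
    # Single-linkage hierarchical clustering of the motion cues.
    labels = fcluster(linkage(pts, method='single'), t=join_dist,
                      criterion='distance')
    return int(labels.max())                    # number of moving groups found

# Two synthetic blobs, each shifted a few pixels between frames.
prev_gray = np.zeros((120, 160), np.uint8)
next_gray = np.zeros((120, 160), np.uint8)
for (x0, x1) in [(40, 44), (120, 124)]:
    cv2.circle(prev_gray, (x0, 60), 6, 255, -1)
    cv2.circle(next_gray, (x1, 60), 6, 255, -1)
print(motion_clusters(prev_gray, next_gray))    # expect 2
```
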
  • Item
    Analysis of beat-to-beat ventricular repolarization duration variability from electrocardiogram signal
    IMAM, MOHAMMAD ( 2015)
    Electrocardiogram (ECG) signal analysis is a ubiquitous tool for investigating the heart's function. The ECG reflects the propagation of the cardiac action potential through the heart chambers (from the atria to the ventricles), and any graphically detectable irregularity in the ECG represents an abnormality in the polarization processes (i.e. depolarization and repolarization) of the cardiac muscle cells. The lower chambers of the heart, termed the ventricles, perform the main pumping function by directing blood to the lungs and the peripheral system, including the brain and all other body parts. Abnormalities in ventricular function are critical and can cause fatal cardiac diseases in which the heart loses its normal ability to maintain proper circulation. The depolarization and repolarization processes of the cardiac action potential drive the contraction and relaxation of the heart, and their durations can be measured from the temporal distances between different ECG waves (e.g. QRS duration, RR interval, QT interval). Abnormalities in these temporal durations, quantified by different time series variability measures, indicate problems in the normal polarization of cardiac muscle. The ventricular repolarization (VR) duration contains both the depolarization and repolarization durations, though the duration of depolarization is quite small in comparison to that of repolarization. Prolongation of the VR duration from a normal baseline is a sign of ventricular dysfunction, which might initiate fatal ventricular arrhythmias (ventricular tachycardia and ventricular fibrillation). VR duration variability, represented by QT interval time series variability (QTV) in the ECG, contains crucial information about the dynamics of the VR process, which characterises the function of the ventricles. QTV is inherently affected by heart rate, respiration, the autonomic nervous system (ANS), age, gender and different genetic disorders of cardiac ion channels. Therefore, variation in VR duration may be affected by several factors, which cannot be analysed properly using gross time series variability measures (e.g. mean, standard deviation). This thesis develops QTV analysis techniques based on QT interval time series extracted from the ECG, examining how different physiological and pathological conditions affect the normal VR process and how this alteration can be used for subclinical prediction of different cardiac diseases. Model-based QTV analysis techniques were investigated, and a respiration-informed modelling approach is proposed for analysing dynamic QTV in healthy ageing and under stress. ECG-derived respiration (EDR) was found to be a valid surrogate for respiration in modelling QTV, providing an ECG-only modelling technique that removes the need to record a separate respiration signal. EDR-based modelling was found to be very effective in describing QTV changes with denervation of the ANS branches (parasympathetic and sympathetic) in cardiac autonomic neuropathy (CAN), a prevalent complication in diabetic patients. These findings describe the effect of ANS modulation on QTV, which is important for validating QTV as a non-invasive measure of sympathetic nervous system modulation of the ventricles. A novel approach describing the interaction of systolic and diastolic time intervals, derived from the VR duration (i.e. QT interval) and the cardiac cycle duration (i.e. RR interval) in the ECG, was found to be very effective for detecting subclinical CAN and CAN progression. This finding demonstrates the feasibility of ECG-based VR duration measures for analysing left ventricular function. A novel beat-to-beat QT-RR interaction analysis technique was developed and found to be very useful in analysing age-related alterations in the normal VR process. The proposed measure can also be used to determine the QTV component that is not affected directly by the RR intervals (i.e. the QTV component independent of heart rate variability), which is more sensitive to sympathetic modulation of the ventricles. Moreover, this technique showed promising results in the analysis of dynamic QTV changes before arrhythmogenesis, which can be used for predictive analysis of ventricular arrhythmias. Finally, the QTV analysis techniques proposed in this thesis will help in designing low-cost, effective ECG-based ambulatory care systems for subclinical cardiovascular disease detection.
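
One idea in this abstract, separating the QTV component driven by heart rate from the component independent of it, can be illustrated with a simple linear QT-RR regression on synthetic beat-to-beat series. The linear model and the synthetic data below are assumptions for illustration only, not the thesis' beat-to-beat QT-RR interaction technique.

```python
import numpy as np

rng = np.random.default_rng(2)
n_beats = 500
# Synthetic beat-to-beat series: RR drifts slowly, QT follows RR plus its own noise.
rr = 0.85 + 0.05 * np.sin(np.arange(n_beats) / 20) + 0.01 * rng.normal(size=n_beats)
qt = 0.30 + 0.15 * rr + 0.005 * rng.normal(size=n_beats)

# Fit QT = a + b*RR and treat the residual as the heart-rate-independent QTV component.
b, a = np.polyfit(rr, qt, 1)
residual = qt - (a + b * rr)

print("total QTV (SD of QT, s):          ", round(qt.std(), 4))
print("RR-independent QTV (SD of resid.):", round(residual.std(), 4))
```
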
  • Item
    Planar nanoelectronic devices and biosensors using two-dimensional nanomaterials
    AL-DIRINI, FERAS MOHAMAD ( 2015)
    Graphene, a monolayer of carbon atoms and the first two-dimensional (2D) material to be isolated, has sparked great excitement and vast opportunities in the global research community. Its isolation led to the discovery of a new family of completely 2D materials, each of which exhibits unique properties in its own right. Such a wide range of new nanomaterials in a completely unexplored 2D platform offers a potential treasure trove for the electronics industry that is yet to be explored. However, after more than a decade of research, nanoelectronic devices based on 2D nanomaterials have not yet met the high expectations set for them by the electronics industry. This thesis aims to drive these efforts forward by proposing a different approach to the conceptualization of nanoelectronic devices in light of the new opportunities offered by 2D nanomaterials. The proposed approach is centred on exploiting the truly unique property of two-dimensionality, which defines and distinguishes this exciting family of 2D nanomaterials, for the realization of completely 2D planar nanoelectronic devices. Less reliance is placed on properties unique to individual 2D nanomaterials; however, wherever possible, such properties are exploited to enhance the performance of the proposed devices. The approach is applied to the conceptualization of a number of planar nanoelectronic devices with potential in a range of immediate as well as long-term applications, complementing conventional electronics in the short term while also having the potential to revolutionize electronics in the long term. All of the proposed devices are planar, completely 2D and realizable within a single 2D monolayer, reducing the required number of processing steps and enabling extreme miniaturization and CMOS compatibility. For the first time, a 2D Graphene Self-Switching Diode (G-SSD) is proposed and investigated, showing promising potential as a nanoscale rectifier. By exploiting some of graphene's unique properties, the G-SSD is transformed into different types of planar devices that can achieve rectification, Negative Differential Resistance (NDR) operation and tunable biosensing. The extension of the proposed approach to other 2D nanomaterials is also investigated by exploring the implementation of SSDs using MoS2 and silicene. Finally, new classes of graphene resonant tunneling diodes (RTDs) with completely 2D planar architectures are proposed, showing unique transport properties and promising performance while requiring minimal processing steps during fabrication.
  • Item
    Differential changes in synaptic inputs to ON & OFF retinal ganglion cells during retinal degeneration
    Saha, Susmita ( 2015)
    Retinitis pigmentosa (RP) is a family of inherited retinal degenerations caused by the gradual loss of the light-sensitive cells in the retina, known as photoreceptors. Without the ability to transduce light energy into biological signals, RP eventually leads to complete blindness, even though many of the non-light-sensitive elements of the retinal nervous system remain electrically active. Prosthetic retinal implants, which convert the output from digital cameras into patterns of electrical stimulation that can activate the retina, have successfully restored some useful vision to blind patients with RP. However, many challenges remain before retinal prostheses are fully mature, i.e. capable of mimicking normal retinal function. One major issue is the ability to differentially stimulate the ON and OFF retinal ganglion cells (RGCs), which transmit retinal information between the eye and the brain. It is important to know the functional condition of these two cell types in RP, especially at the advanced stages of the disease. The overall aim of my PhD project was to demonstrate the differential effect of complete photoreceptor loss on the synapse density, synaptic currents and spiking activity of ON and OFF retinal ganglion cells, using the rd1 mouse as a model of degeneration. I used immunolabeling, image analysis and whole-cell patch clamp techniques to address these questions. The immunohistochemical studies in the first part of the project revealed that, at the stage of complete photoreceptor loss, ON-RGCs showed a significant reduction in excitatory synapse density while OFF-RGCs showed a significant reduction in inhibitory synapse density. The distribution patterns of both excitatory and inhibitory synapses across the dendritic fields of RGCs were unchanged. The change in synaptic input was associated with a reduction in the density of ON-cone bipolar cells. In the second part, patch clamp electrophysiology experiments conducted on the same cell types in the same mouse model at the same stage of degeneration revealed a significant reduction in spontaneous excitatory post-synaptic current (sEPSC) frequency and a significant upregulation in spontaneous inhibitory post-synaptic current (sIPSC) frequency in ON-RGCs, but not in OFF-RGCs; OFF-RGCs only showed EPSC events of significantly higher amplitude. Finally, we found a strong correlation between the results of the first and second parts of this study, which suggests that the ON and OFF ganglion cells are differentially affected by complete photoreceptor loss in RP.
  • Item
    Optimisation of energy efficiency in communication networks
    LIN, TAO ( 2015)
    The mobile data traffic is experiencing unprecedented growth due to the rapid proliferation of devices such as smart phones and tablets. Improving the efficiency of mobile networks, both in terms of traffic flow and energy consumption, is thus critical for sustaining this growing demand. While the adoption of new technologies such as small cell networks and cognitive radio reduces deployment and operational costs, challenges remain regarding how the data traffic can be efficiently processed and transported over the mobile backhaul network. The first aim of this study is to improve the energy efficiency of mobile backhaul networks, while simultaneously balancing the traffic load on its various backhaul nodes, in order to maintain required service quality. First a multi-objective optimisation problem is formulated, then a distributed algorithm is proposed to solve it. The theoretical analysis and numerical simulations demonstrate the results. It is shown that the traffic diurnal cycle poses notable challenges for operators to plan, design and operate mobile backhaul networks so as to achieve desired energy-performance tradeoffs. Continuing growth in cloud-based services and global IP traffic necessitates performance improvements in energy consumption, network delay and service availability. Data centres providing cloud services and transport networks have often multiple stakeholders, which makes it difficult to implement centralised traffic management. The second aim of this study is to apply a game-theoretic approach to data traffic management to obtain a distributed and energy-efficient solution, where each edge router is acting as a strategic player. A multi-objective optimisation problem with a-priori user-specific preferences is formulated for each player and a distributed iterative algorithm is proposed to solve the game. The existence of Nash Equilibrium (NE) of the proposed game is proven followed by the theoretical convergence analysis of the iterative algorithm. The efficiency loss between the strategic game and corresponding global optimisation method is analysed to quantify the impact of selfish behaviour on the overall system performance. Simulation results show notable challenges for operators to plan, design and operate a multimedia content network in order to optimise energy consumption, network delay and load balance over a diurnal cycle. The third aim of this study is to develop an optimisation framework for energy efficiency of optical core networks using Software Defined Networking (SDN). A general system model is proposed where switch-off/sleep mode is introduced to model the power consumption of individual network devices. A multi-objective optimisation problem is formulated by considering system power consumption, server load balance and transport network latency. To demonstrate the problem, a generic Software Defined Networking model is implemented in the Mininet platform by leveraging the OpenFlow protocol. A core network topology is studied in the Mininet framework with various parameter configurations. The simulation results show network topology, traffic diurnal cycle and user Quality of Service (QoS) requirements pose notable challenges for network plan, design and operation so as to achieve the desired energy-performance tradeoffs.