# Electrical and Electronic Engineering - Theses

## Search Results

Now showing 1 - 10 of 20
• Item
A Bayesian signal processing framework for dual polarized weather radar
Samarasekera, Senaka ( 2015)
Current weather radar algorithms for rain micro-physical parameter estimation do not make optimal use of the micro-physical models that govern the rain drop states. This leads to increased uncertainty at higher radar resolutions, and the underlying assumptions of these models can be inconsistent. In this thesis, I design and implement a non-linear filter that estimates the static micro-physical parameters of rain using dual polarized radar returns. I model the radar returns as electromagnetic backscattering from a random anisotropic medium, and introduce a likelihood function that enables joint estimation of the rain micro-physical parameters. I then examine the identifiability constraints of this model and well-condition it within the Bayesian framework. The filter takes the form of a Rao-Blackwellized sequential Monte Carlo sampler. Filter convergence is achieved with independent Metropolis-Hastings move kernels and progressive correction. Application of the filter to rain-storm data suggests it can give higher-resolution estimates with increased precision.
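The resample-move structure mentioned in the abstract (sequential Monte Carlo with Metropolis-Hastings rejuvenation) can be illustrated on a much simpler static-parameter problem. This is a minimal sketch under assumed Gaussian observations, not the thesis's radar filter; all names and the observation model are invented for illustration.

```python
import math
import random

random.seed(0)

# Toy problem: estimate a static parameter theta from noisy
# observations y_t ~ N(theta, SIGMA^2), using sequential importance
# resampling with an independent Metropolis-Hastings move step.
SIGMA = 1.0
TRUE_THETA = 2.0
obs = [random.gauss(TRUE_THETA, SIGMA) for _ in range(50)]

def loglik(theta, ys):
    """Log-likelihood of all observations seen so far."""
    return sum(-0.5 * ((y - theta) / SIGMA) ** 2 for y in ys)

N = 500
particles = [random.gauss(0.0, 5.0) for _ in range(N)]  # broad prior

seen = []
for y in obs:
    seen.append(y)
    # Weight each particle by the likelihood of the newest observation.
    w = [math.exp(-0.5 * ((y - th) / SIGMA) ** 2) for th in particles]
    total = sum(w)
    w = [wi / total for wi in w]
    # Multinomial resampling.
    particles = random.choices(particles, weights=w, k=N)
    # Independent MH move kernel to rejuvenate the degenerate set:
    # propose from a Gaussian fitted to the current particle cloud.
    mu = sum(particles) / N
    sd = math.sqrt(sum((p - mu) ** 2 for p in particles) / N) + 1e-6
    moved = []
    for th in particles:
        prop = random.gauss(mu, 2 * sd)
        log_a = loglik(prop, seen) - loglik(th, seen)
        # Correction for the independent proposal density.
        log_a += (-0.5 * ((th - mu) / (2 * sd)) ** 2) \
               - (-0.5 * ((prop - mu) / (2 * sd)) ** 2)
        moved.append(prop if math.log(random.random()) < log_a else th)
    particles = moved

estimate = sum(particles) / N
print(round(estimate, 2))  # posterior mean, near TRUE_THETA
```

The move step is what keeps the particle set diverse when the target parameter is static; without it, resampling alone collapses all particles onto a few values.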
• Item
Modeling fetal cardiac valve intervals and fetal-maternal interactions
Despite advances in fetal healthcare, around 9-10 out of 1000 babies in Australia die in the perinatal period, which is defined as starting from 22 weeks of pregnancy and extending to the first week after birth. This mortality rate is three to four times higher in some developing countries. Furthermore, false alarms produced by current fetal surveillance technology impose unnecessary interventions, which involve additional costs and potential maternal and fetal risks. There is therefore a critical need for more accurate fetal assessment methods that reliably identify fetal risks. Fetal heart assessment is one of the main concerns in fetal healthcare and provides significant information about fetal development and well-being. The aim of this research is to develop automated and accurate fetal heart assessment methods using noninvasive and less specialized techniques. In this research, automated methods were developed for estimating the fetal cardiac valve intervals, which are a fundamental and clinically significant part of fetal heart physiology. For this purpose, simultaneous recordings of one-dimensional Doppler ultrasound (1-D DUS) signals and noninvasive fetal electrocardiography (fECG) were used. New methods were developed for decomposing the DUS signal into the component manifesting the valves' motion. Opening and closing of the valves were then identified automatically based on the features of the DUS component, their temporal order, and their duration from the R-peak of the fECG. Evaluation of the cardiac intervals across healthy gestational ages and in heart-anomaly cases showed evidence of their effectiveness in assessing fetal development and well-being. Fetal heart activity is influenced not only by fetal conditions and maturation, but also by maternal psychological and physiological conditions. Therefore, this research also focused on the relationship between maternal and fetal heart rates.
To this end, a model-free method based on Transfer Entropy (TE) was used to quantify directed interactions between maternal and fetal heart rates at various time delays and gestational ages. Changes in the coupling throughout gestation provided detailed information on the fetal-maternal relationship, which can yield novel clinical markers of healthy versus pathological fetal development.
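Transfer entropy measures how much the past of one series reduces uncertainty about the next value of another, beyond what the target's own past explains. The sketch below is a generic plug-in estimator for discrete series with history length 1; it is an illustration of TE itself, not the thesis's estimator or its heart-rate preprocessing, and the demo series are synthetic.

```python
import math
import random
from collections import Counter

def transfer_entropy(x, y):
    """Plug-in estimate (in bits) of TE from x to y for discrete
    series, using history length 1:
    TE = sum p(y_{t+1}, y_t, x_t) * log2[ p(y_{t+1}|y_t,x_t) / p(y_{t+1}|y_t) ]
    """
    n = len(y) - 1
    joint = Counter((y[t + 1], y[t], x[t]) for t in range(n))
    cond = Counter((y[t], x[t]) for t in range(n))
    ypair = Counter((y[t + 1], y[t]) for t in range(n))
    ymarg = Counter(y[t] for t in range(n))
    te = 0.0
    for (y1, y0, x0), c in joint.items():
        p = c / n
        p_y1_given_both = c / cond[(y0, x0)]
        p_y1_given_y = ypair[(y1, y0)] / ymarg[y0]
        te += p * math.log2(p_y1_given_both / p_y1_given_y)
    return te

# Demo: y copies x with a one-step delay, so information flows x -> y.
random.seed(1)
x = [random.getrandbits(1) for _ in range(2000)]
y = [0] + x[:-1]
te_xy = transfer_entropy(x, y)  # close to 1 bit
te_yx = transfer_entropy(y, x)  # close to 0
print(round(te_xy, 2), round(te_yx, 2))
```

The asymmetry (`te_xy` large, `te_yx` near zero) is what makes TE a *directed* coupling measure, which is why the abstract uses it for maternal-to-fetal versus fetal-to-maternal interactions. Real heart-rate series would first need discretization or a continuous estimator.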
• Item
Energy efficient wireless system design
Kudavithana, Dinuka ( 2015)
The demand for telecommunication networks is increasing rapidly, and wireless access is a major contributor to this trend. On the other hand, wireless is considered the least energy-efficient transmission medium, mainly due to its unguided nature. Efforts to increase wireless energy efficiency generally focus on reducing the transmit power. However, this strategy may not save energy in short-distance communication systems, where the processing energy in hardware becomes significant compared to the transmit radio energy. This thesis models the energy consumption of wireless systems as a function of several parameters, such as receiver SNR, RF bandwidth, information rate, modulation scheme, and code rate. We propose energy models for synchronization systems and other digital signal processing modules by considering the computational complexity of the algorithms and the required circuitry. Initially, we focus on the synchronization aspects of wireless receivers. We study various algorithms for symbol timing recovery, carrier frequency recovery, and carrier phase recovery, and compare their performance to identify suitable algorithms for different SNR regions. We then develop energy models for these synchronization sub-systems by analyzing the computational complexity of the circuitry in terms of the number of arithmetic, logic, and memory operations. We define a new metric, the energy consumption needed to achieve a given performance as a function of SNR, to compare the energy efficiency of different estimation algorithms. Next, we investigate the energy-efficiency trade-offs of a point-to-point wireless system by developing energy models of both the transmitter and receiver that include practical aspects such as error control coding, synchronization, and channel equalization. In our system, a multipath Rayleigh-fading channel model and a low-density parity-check (LDPC) coding scheme are chosen.
We then develop a closed-form approximation for the total energy consumption as a function of receiver SNR and use it to find a minimum-energy transmission configuration. The results reveal that low-SNR operation (i.e. low transmit power) is not always the most energy-efficient strategy, especially in short-distance communication. We present an optimal-SNR concept that can save a significant amount of energy, mainly in short-range transmission systems. We then focus on cooperative relay systems. We investigate the energy-efficiency trade-offs of single-relay networks by developing energy models for two relay strategies: amplify-and-forward (AF) and detect-and-forward (DF). We then optimize the location and power allocation of the relay to minimize the total energy consumption. The optimum location is found in two-dimensional space for constrained and unconstrained scenarios. We also optimize the total energy consumption over the spectral efficiency and derive expressions for the optimal spectral-efficiency values, verifying our results with numerical simulations. Finally, we address the energy efficiency of multi-relay systems by considering a dual-relay cooperative system using the DF protocol with full diversity. We propose a location-and-power-optimization approach for the relays to minimize the transmit radio energy, and then minimize the total system energy from a spectral-efficiency perspective for two scenarios: throughput-constrained and bandwidth-constrained configurations. Our proposed approach reduces the transmit energy consumption compared to a system with equal power allocation and equidistant relays. Lastly, we present an optimal transmission scheme as a function of distance by considering single-hop and multi-hop schemes. The overall results imply that, as the transmission distance increases, more relays are required to maintain high energy efficiency.
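The optimal-SNR idea (low transmit power is not always cheapest once fixed circuit power is counted) can be reproduced with a deliberately simplified link-energy model. All constants and the model form below are illustrative assumptions, not the thesis's closed-form expression: energy per bit is taken as (transmit power + fixed circuit power) divided by the Shannon-style rate.

```python
import math

# Hypothetical model parameters (illustrative only):
N0 = 4e-21        # noise spectral density, W/Hz
B = 1e6           # bandwidth, Hz
P_CIRCUIT = 0.1   # fixed processing/circuit power, W
ALPHA = 3.0       # path-loss exponent

def energy_per_bit(snr, d):
    """Energy per bit (J) for a link of distance d metres at a given
    receiver SNR: required transmit power grows as snr * d^ALPHA,
    while the achievable rate grows only as log2(1 + snr)."""
    p_tx = N0 * B * snr * d ** ALPHA
    rate = B * math.log2(1 + snr)
    return (p_tx + P_CIRCUIT) / rate

def optimal_snr(d):
    """Grid search over -10..60 dB for the minimum-energy SNR."""
    grid = [10 ** (s / 10) for s in range(-10, 61)]
    return min(grid, key=lambda s: energy_per_bit(s, d))

# Short link: transmit power is negligible, so a high SNR (fast, short
# transmission) minimises energy. Long link: transmit power dominates,
# pushing the optimum SNR down.
snr_short = optimal_snr(10.0)
snr_long = optimal_snr(1000.0)
print(10 * math.log10(snr_short), "dB vs", 10 * math.log10(snr_long), "dB")
```

Under these assumptions the short link's optimum sits at the top of the search grid while the long link's optimum is substantially lower, mirroring the abstract's conclusion that low-SNR operation is not always the most energy-efficient strategy.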
• Item
Fundamental energy requirements of information processing and transmission
Angley, Daniel Michael ( 2015)
This thesis investigates fundamental limits on the energy required to process and transmit information. By combining physical laws, such as the second law of thermodynamics, with information theory, we present novel limits on the efficiency of systems that track objects, perform stochastic control, switch communication systems, and communicate information. This approach yields results that apply regardless of how the system is constructed. While the energy required to perform an ideal measurement of a static state has no known lower bound, this thesis demonstrates that the same is not true for noisy measurements or for a state that evolves stochastically. We derive new lower bounds on the energy required to perform such tracking tasks, including Kalman filtering. The goal of stochastic control is usually to reduce the entropy of the controlled system. This is also the task of a Maxwell demon, a thought experiment in which a device or being reduces the thermodynamic entropy of a closed system, violating the second law of thermodynamics. We demonstrate that the same arguments that 'exorcise' Maxwell's demon can be used to find lower bounds on the energy consumption of stochastic controllers. We show that the configuration of a switching system in communications, which directs input signals to the desired outputs, can be used to store information. Reconfiguring the switch therefore erases information, and must have an energy cost of at least $k_B T \ln(2)$ per bit by Landauer's principle. We then calculate lower bounds on the energy required to perform finite-time switching in a one-input, two-output MEMS (microelectromechanical system) mirror switch subject to Brownian motion, demonstrating that the shape of the potential the switch is subject to affects both the steady-state noise and the energy required to change the configuration.
Finally, by modifying Feynman's ratchet-and-pawl heat engine to perform communication instead of work, we investigate the efficiency of communication systems that operate solely on the temperature difference between two thermal reservoirs. The lower bound on the energy consumption of any communication system operating between two thermal reservoirs, with no channel noise and using equiprobable partitions of heat energy taken from these reservoirs, is found to be $\frac{T_H T_C}{T_H-T_C} k_B \ln(2)$, where $T_H$ and $T_C$ are the temperatures of the hot and cold reservoirs and $k_B$ is Boltzmann's constant.
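The two bounds quoted in this abstract are easy to evaluate numerically. The sketch below simply plugs in the exact SI value of Boltzmann's constant; the function names are illustrative, and the formulas are the ones stated in the abstract.

```python
import math

K_B = 1.380649e-23  # Boltzmann constant, J/K (exact SI value)

def landauer_bound(T):
    """Landauer's principle: minimum energy (J) to erase one bit
    at temperature T, i.e. k_B * T * ln(2)."""
    return K_B * T * math.log(2)

def two_reservoir_bound(T_hot, T_cold):
    """The abstract's lower bound per bit for a noiseless
    communication system driven only by the temperature difference
    between reservoirs at T_hot and T_cold:
    (T_H * T_C / (T_H - T_C)) * k_B * ln(2)."""
    return (T_hot * T_cold) / (T_hot - T_cold) * K_B * math.log(2)

print(landauer_bound(300.0))            # ~2.87e-21 J per bit at 300 K
print(two_reservoir_bound(400.0, 300.0))
```

Note that the two-reservoir bound diverges as $T_C \to T_H$: with no temperature difference there is no free energy to drive communication, so the cost per bit grows without limit.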
• Item
Energy consumption of cloud computing and fog computing applications
Jalali, Fatemeh ( 2015)
• Item
Crowd behavior analysis using video analytics
Rao, Aravinda Sridhara ( 2015)
Crowd analysis is a critical problem in understanding crowd behavior for surveillance applications. The current practice is to manually scan video feeds from several sources. Video analytics allows the automatic detection of events of interest, but faces many challenges because of non-rigid crowd motions and occlusions; algorithms developed for rigid objects are ineffectual for crowds. This study describes optical flow-based video analytics for crowd analysis, with applications including people counting, density estimation, event detection, and abnormal event detection. There are two main approaches to detecting objects in a video. The first, background modeling, models the scene background: modeled pixel values represent the scene, and each pixel value determines whether it belongs to the background or foreground. The second provides motion information by estimating objects' motions. Articulated actions and sudden movements of people limit background modeling; therefore, this thesis uses motion estimation to detect objects. Crowd density estimation is important for understanding crowd behavior. Optical flow features provide motion information on objects, and refining these features with spatial filters produces motion cues that signal the presence of people. Hierarchically clustering these motion cues, using single-linkage clustering, yields crowd density estimates. The approach presented in this thesis processes frames block by block and produces excellent results on a frame-by-frame basis, a new approach compared with existing ones. Crowd events such as walking, running, merging, separating into groups ("splitting"), dispersing, and evacuating are critical to understanding crowd behavior. However, video data lie in a high-dimensional space, whereas events lie in a low-dimensional space.
This thesis introduces a novel Optical Flow Manifolds (OFM) scheme to detect crowd events. Experimental results suggest that the proposed semi-supervised approach performs best in detecting merging, separating into groups ("splitting"), and dispersion events compared with existing methods. The advantages of the semi-supervised approach are that it requires only a single parameter to detect crowd events and provides results on a frame-by-frame basis. Crowd event detection requires information on the number of neighboring and incoming frames, which is difficult to estimate in advance; it therefore needs adaptive schemes that can automatically detect events. This study presents a new adaptive crowd event detection approach using the OFM framework. To the best of our knowledge, this is the first study to report adaptive crowd event detection. Experimental results suggest that the proposed approach accurately detects crowd events and, based on the computational time it needs to detect events, is suitable for near real-time video surveillance systems. Anomalous events in crowded videos need spatio-temporal localization of crowd events. Appropriate features and suitable coding of features result in accurate event localization. In this study, the proposed spatial and spatio-temporal coded features detect anomalous events. To the best of our knowledge, this is the first study to report the detection of loitering people in a video. The approach helps manage crowds at, for example, stadiums, public transport hubs, pedestrian crossings, and other public places.
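The density-estimation step described in this abstract (cluster motion cues hierarchically with single linkage) can be sketched on toy data. The cue coordinates and the cutoff below are invented for illustration; in the thesis the cues would come from spatially filtered optical-flow fields, not hand-written points.

```python
import math

# Hypothetical motion cues: (x, y) image locations where filtered
# optical-flow magnitude exceeded a threshold.
cues = [(10, 10), (12, 11), (11, 13),   # first group of people
        (80, 82), (82, 80), (81, 84),   # second group
        (200, 40)]                      # isolated person

def single_linkage(points, cutoff):
    """Agglomerative single-linkage clustering: repeatedly merge the
    two clusters whose closest pair of points is nearest, stopping
    when the smallest inter-cluster gap exceeds `cutoff`."""
    clusters = [[p] for p in points]

    def gap(a, b):  # single-linkage distance = closest cross-pair
        return min(math.dist(p, q) for p in a for q in b)

    while len(clusters) > 1:
        i, j = min(((i, j) for i in range(len(clusters))
                    for j in range(i + 1, len(clusters))),
                   key=lambda ij: gap(clusters[ij[0]], clusters[ij[1]]))
        if gap(clusters[i], clusters[j]) > cutoff:
            break
        clusters[i] += clusters.pop(j)
    return clusters

groups = single_linkage(cues, cutoff=10.0)
print(len(groups))  # number of detected crowd regions
```

Each resulting cluster is one crowd region; the cluster count and sizes give a crude people-count and density estimate per frame, which is the role this step plays in the pipeline above.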
• Item
Three-dimensional intensity reconstruction in single-particle experiments: a spherical symmetry approach
Flamant, Julien ( 2015)
The ability to decipher the three-dimensional structures of biomolecules at high resolution will greatly improve our understanding of the biological machinery. To this end, X-ray crystallography has been used by scientists for several decades with tremendous results. This imaging method, however, requires a crystal to be grown, and for most interesting biomolecules (proteins, viruses) this may not be possible. The single-particle experiment was proposed to address these limitations, and the recent advent of ultra-bright X-ray Free Electron Lasers (XFELs) opens a new set of opportunities in biomolecular imaging. In the single-particle experiment, thousands of diffraction patterns are recorded, where each image corresponds to an unknown, random orientation of an individual copy of the biomolecule. These noisy, unoriented two-dimensional diffraction patterns then need to be assembled in three-dimensional space to form the three-dimensional intensity function, which completely characterizes the three-dimensional structure of the biomolecule. This work focuses on geometrical variations of an existing algorithm, the Expansion-Maximization-Compression (EMC) algorithm introduced by Loh and Elser. The algorithm relies upon an expectation-maximization method, maximizing the likelihood of an intensity model with respect to the diffraction patterns. The contributions of this work are (i) the redefinition of the EMC algorithm in a spherical design, motivated by the intrinsic properties of the intensity function, (ii) the use of an orthonormal harmonic basis on the three-dimensional ball, which allows a sparse representation of the intensity function, (iii) the scaling of the EMC parameters with the desired resolution, increasing computational speed, and (iv) an analysis of the intensity error with respect to the EMC parameters.
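The core expectation-maximization idea behind EMC (average data over a posterior on unknown orientations, then update the model) can be shown on a drastically reduced analogue: a 1-D signal observed under unknown cyclic shifts, with the shift playing the role of the particle orientation. This is a toy sketch under invented parameters, not the EMC algorithm itself, which operates on 2-D diffraction patterns and 3-D rotations.

```python
import math
import random

random.seed(3)

# Unknown "structure" and noisy observations at random cyclic shifts.
TRUE = [0.0, 1.0, 4.0, 1.0, 0.0, 0.0]
L = len(TRUE)
SIGMA = 0.5

def shift(v, s):
    """Cyclic shift of v by s positions."""
    return [v[(i - s) % L] for i in range(L)]

data = [[a + random.gauss(0, SIGMA)
         for a in shift(TRUE, random.randrange(L))] for _ in range(200)]

model = data[0][:]  # initialise from one pattern to break symmetry
for _ in range(20):
    acc = [0.0] * L
    for img in data:
        # E-step: posterior over the L possible shifts ("orientations")
        # under a Gaussian noise model.
        logw = [-sum((a - b) ** 2 for a, b in zip(img, shift(model, s)))
                / (2 * SIGMA ** 2) for s in range(L)]
        mx = max(logw)
        w = [math.exp(lw - mx) for lw in logw]
        z = sum(w)
        # M-step accumulation: un-shift each pattern, weighted by its
        # orientation posterior, and average into the new model.
        for s in range(L):
            p = w[s] / z
            back = shift(img, -s)
            for i in range(L):
                acc[i] += p * back[i]
    model = [a / len(data) for a in acc]

# The model recovers TRUE up to a global cyclic shift.
print(round(max(model), 1))
```

EMC adds the "expansion" and "compression" steps around this EM core to move between the 3-D intensity grid and per-orientation 2-D slices; the spherical-harmonic contribution in this thesis replaces that grid representation.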
• Item
Analysis of beat-to-beat ventricular repolarization duration variability from electrocardiogram signal