Electrical and Electronic Engineering - Theses
Investigating the Role of Residential PV Systems for Primary Frequency Regulation
The increasing penetration of residential photovoltaic (PV) systems is reducing net demand and displacing synchronous generation, with serious implications for the provision of Primary Frequency Response (PFR) following a contingency. Furthermore, distribution networks must manage the excessive reverse power flows caused by residential PV systems to avoid voltage or asset utilisation violations. To prevent such problems, export limits are often imposed, but at the cost of total power exported. Through pre-curtailment of maximum generation, or re-distribution of power through dynamic optimal export limits, residential PV systems can create a power reserve for PFR. Furthermore, the time-varying net demand produced by residential PV leads to many changing operating states, with implications for the oscillatory performance of the synchronous generators still online. In this context, this thesis investigates and proposes methodologies to determine the role of residential PV systems in the provision of PFR and the effect of a time-varying net demand on small-signal stability. To achieve this, several corresponding challenges need to be understood. Firstly, the system-level effects of an increase in PV penetration need to be understood: how synchronous generators change power output in response to a change in net demand must be modelled, and the dispatch of PFR among the synchronous generators must also be considered. Secondly, any pre-curtailment of a PV system for PFR will alter the net demand and the potential PFR requirements. Thirdly, residential PV systems are connected to the power system through distribution networks, which require management to prevent network issues related to high penetrations of residential PV, and this management in turn influences net demand.
This requires modelling and understanding how distribution networks operate, how they are restricted by their physical limitations, and how they are managed; all of this affects the net demand at the system level and needs to be considered. Finally, the time-varying nature of a power system with high penetrations of PV (and the displacement it causes) presents a challenge in assessing small-signal stability, since the modes of oscillation that remain change throughout the day and cannot be related to a single constant set. This thesis addresses the aforementioned challenges as follows. A unit commitment (UC) is used to model the behaviour of generators over time, capturing changes in power output in response to residential PV, determining which generators are forced offline, and distributing PFR among the synchronous generators. The UC is modified to pre-curtail the power of residential PV systems for PFR, accounting for the change in net demand that pre-curtailment causes in the supply of PFR. Using a modified IEEE 9-bus system, the findings highlight that residential PV systems providing PFR can prevent the inefficient and costly operation of synchronous generators (providing PFR) at low power outputs. The need to represent distribution networks when assessing the role of residential PV systems providing PFR is demonstrated using a realistic Australian MV-LV residential feeder (from the primary substation to individual customers). Export limits are imposed to prevent steady-state distribution problems. The findings highlight that if distribution network constraints are not considered, the level of synchronous generator displacement may be significantly over-estimated, with corresponding knock-on effects for PFR requirements. Optimal dynamic export limits are then applied beyond managing steady-state issues in distribution networks to provide PFR.
A method is proposed to translate these optimal dynamic export limits into a PFR reserve via droop settings. It was found that a significant PFR reserve is available across a power system if optimal dynamic export limits are used. This PFR reserve from residential PV systems can help reduce system costs, as synchronous generators no longer need to operate at low power solely to provide PFR. The small-signal stability of the system is assessed considering a time-varying net demand and the corresponding response of synchronous generators, by integrating a UC with a small-signal stability study. Furthermore, a method is presented whereby the oscillatory modes that remain despite displacement can be tracked. Results showed that oscillatory modes can change their damping behaviour significantly over time, with the critical (least-damped) mode changing as well. This is significant, given that an approach not considering a time-varying net demand may miss these findings, potentially leading to inadequate damping.
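As an illustration of how a pre-curtailed export limit can back a droop response, the sketch below converts the headroom between available PV power and a dynamic export limit into a frequency-dependent output. This is a minimal simplification; the droop percentage, rating and limits are hypothetical values, not the thesis's actual settings.

```python
def pfr_power(p_avail_kw, export_limit_kw, droop_pct, f_hz,
              f_nom=50.0, p_rated_kw=5.0):
    """Illustrative droop response of a pre-curtailed residential PV system.

    The inverter normally exports at the (dynamic) export limit; the gap up
    to the available solar power forms a headroom reserve for raise-PFR.
    All parameters here are assumed, not taken from the thesis.
    """
    reserve_kw = max(p_avail_kw - export_limit_kw, 0.0)  # pre-curtailed headroom
    df = f_hz - f_nom                                    # frequency deviation
    # Conventional droop: delta_P = -(df / f_nom) / (droop / 100) * P_rated
    dp_kw = -(df / f_nom) / (droop_pct / 100.0) * p_rated_kw
    # Raise response is capped by the headroom; lower response by current output
    dp_kw = min(max(dp_kw, -export_limit_kw), reserve_kw)
    return export_limit_kw + dp_kw
```

For example, a 5 kW-rated system with 5 kW available but limited to export 3 kW holds a 2 kW reserve; on a 0.5 Hz under-frequency event with a 4% droop it raises output by 1.25 kW.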
Challenges in optical wireless communication networks
Wireless local area networks (WLANs) have continually evolved over the last few decades to meet ever-growing user demands. However, popular radio frequency technologies such as Wi-Fi are now experiencing a spectrum crunch due to a multitude of bandwidth-hungry applications and the limited bandwidth available in the sub-6 GHz bands. Therefore, a number of complementary technologies such as 60 GHz Wi-Fi, visible light communication and optical wireless communication have emerged to build high-capacity WLANs in indoor spaces. Amongst these emerging WLAN technologies, optical wireless communication, operating in the infrared range, is becoming popular as it has access to virtually unlimited bandwidth compared to radio frequency technologies. With these huge spectrum resources, it is quite straightforward to establish wireless links over 10 Gbps with optical wireless communication. In addition, optical wireless communication has several advantages, such as not interfering with existing WLANs, high security, and simple transceiver designs. Though the physical layer of optical wireless communication is developing fast and brings unprecedented capabilities to the WLAN landscape, the upper-layer protocols and architectures essential for harnessing the benefits of the physical layer to provide multi-gigabit communication have received minimal attention so far. Therefore, this thesis explores upper-layer protocols, algorithms and architectures for optical wireless networks in homogeneous and heterogeneous settings. To this end, we first evaluate the suitability of the contention-based MAC protocol of the Wi-Fi standard for optical wireless networks. The inefficiencies of the contention-based MAC protocol are highly pronounced at the higher data rates of optical wireless networks. Therefore, we introduce an improved version of the Wi-Fi MAC protocol with a novel dynamic contention window tuning mechanism that can operate at multi-gigabit data rates.
Second, due to the lack of a simulation platform for evaluating the performance of optical wireless communication networks, we develop a simulation module for optical wireless networks in the Network Simulator-3 (ns-3) project. The proposed module can deploy optical wireless networks of different architectures and layouts and apply different scheduling algorithms and channel models. To the best of our knowledge, this is the first multi-gigabit optical wireless network simulation module. Third, we explore novel network architectures for optical wireless networks, considering their massive capacity, increased number of access points and smaller cells. Subsequently, we propose the FLOWN (full-duplex split-plane optical wireless network) architecture. The FLOWN architecture is later generalised to all upcoming WLANs, such as Wi-Fi 6, 60 GHz Wi-Fi, and visible light communication, to support homogeneous or heterogeneous WLAN deployments. It features a centralised pool of hardware and software resources, a high-capacity distribution network and advanced capabilities such as full-duplex and split-plane operation. Further, delay-sensitive users can only receive guaranteed quality of service under contention-free MAC protocols; therefore, most upcoming WLAN MAC protocols deploy hybrid versions of contention-based and contention-free MAC protocols to reap the advantages of both. Hence, we finally introduce a contention-free MAC protocol for optical wireless networks with adaptable parameters that can be tuned to the traffic requirements of the current users. Overall, the work reported in this thesis provides a simulation platform for optical wireless networks, as well as insight into design strategies for realising centralised multi-gigabit network architectures and MAC protocols.
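A contention window tuner of the general kind described can be sketched as a simple feedback controller on the observed collision rate. This is purely illustrative, with assumed parameters and a basic multiplicative rule, not the thesis's actual mechanism.

```python
class DynamicCW:
    """Illustrative contention-window tuner (not the thesis's exact scheme).

    Grows the contention window multiplicatively when the observed collision
    rate exceeds a tolerated target, and shrinks it when the channel is
    under-used, staying within [cw_min, cw_max]. All constants are assumed.
    """
    def __init__(self, cw_min=16, cw_max=1024, target=0.1):
        self.cw = cw_min
        self.cw_min, self.cw_max = cw_min, cw_max
        self.target = target  # tolerated collision rate

    def update(self, collisions, attempts):
        """Adjust CW from counts observed over the last measurement interval."""
        rate = collisions / attempts if attempts else 0.0
        if rate > self.target:
            self.cw = min(self.cw * 2, self.cw_max)   # back off harder
        else:
            self.cw = max(self.cw // 2, self.cw_min)  # reclaim airtime
        return self.cw
```

At multi-gigabit rates the measurement intervals become very short, which is precisely why a tuned (rather than fixed binary-exponential) window matters.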
Framework for Designing Multi-Access Edge Computing Network
Multi-access edge computing (MEC) is the next paradigm to support the enormous growth of diverse mobile applications that require high computational power, ultra-low latency, and high bandwidth. The user experience can be enhanced beyond the constrained resources of mobile devices by offloading computation-intensive tasks to MEC hosts. Since MEC hosts are deployed in proximity to the end-users, user mobility leads to multiple handovers in the mobile network, which in turn leads to application migrations in the MEC network. Hence, a critical challenge in MEC is to maintain service continuity between the offloaded user application running on the MEC host and the mobile device when a user is moving from radio node to radio node. On the other hand, since a large number of MEC hosts are going to be deployed within the radio access network, the energy efficiency of these hosts is another challenge for MEC service providers. In this thesis, we design an energy-efficient MEC network by optimizing the resource allocation and MEC host selection problems while considering user movements. Our findings could help mobile operators develop a real-time network resource orchestration system to reduce network costs while increasing the number of users based on users' mobility patterns. This thesis advances the state-of-the-art by making the following contributions: 1. A correlated user mobility model to produce user trajectories during the morning commute. 2. A utilitarian resource distribution algorithm to select suitable locations to deploy hosts and the right amount of resources for each MEC host. 3. Energy-efficient server selection methodologies and energy-efficient virtual machine placement and migration processes to maximize the energy efficiency of the MEC hosts. 4. An extended Balas-Geoffrion additive algorithm to select a suitable host based on cost minimization for the MEC host selection problem. 5. A shortest-path-based methodology for the host selection and user application migration problem to maximize the energy efficiency of the MEC network.
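A shortest-path host selection of the kind listed in contribution 5 can be sketched with Dijkstra's algorithm over a weighted network graph. The edge costs (and whatever energy or latency model produces them) are assumptions for illustration, not the thesis's formulation.

```python
import heapq

def select_host(graph, source, hosts):
    """Pick the MEC host reachable from `source` at minimum path cost.

    `graph` maps node -> {neighbour: edge_cost}; in practice the costs could
    encode energy or latency (illustrative; the thesis's cost model differs).
    Returns the cheapest reachable host, or None if none is reachable.
    """
    dist = {source: 0.0}
    pq = [(0.0, source)]
    while pq:  # standard Dijkstra over non-negative edge costs
        d, u = heapq.heappop(pq)
        if d > dist.get(u, float("inf")):
            continue  # stale queue entry
        for v, w in graph.get(u, {}).items():
            nd = d + w
            if nd < dist.get(v, float("inf")):
                dist[v] = nd
                heapq.heappush(pq, (nd, v))
    reachable = [(dist[h], h) for h in hosts if h in dist]
    return min(reachable)[1] if reachable else None
```

For example, with a user node connected to two candidate hosts through intermediate radio nodes, the function returns the host whose total path cost is lowest.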
Assessing the Impacts of DER on Customer Voltages Using Smart Meter-Driven Low Voltage Line Models
The rapid adoption of distributed energy resources (DER) in low voltage (LV) networks is driving the need for distribution companies to assess their impacts on customer voltages in any demand/generation condition (also known as what-if analyses). Although this can be done by running conventional power flow analyses, there are two main challenges. The first is that LV line models (three-phase LV feeder lines and single-phase service lines) are needed; however, the corresponding impedances are often poorly recorded by distribution companies, i.e., the information is incomplete or unavailable. The second challenge is that, if such studies are needed for operational purposes (calculations in near real-time), implementing power flows for hundreds of LV feeders can be a complex task for distribution companies. Several studies have attempted to solve the challenges of impedance estimation and simplified voltage calculations, but there are still some gaps. Given the rollout of smart meters in many places, several works have exploited smart meter measurements to estimate the impedances of LV line models. However, in most cases, the three-phase nature of LV feeders (i.e., the phase couplings) is not adequately considered; and thus, such approaches cannot cater for the needs of inherently unbalanced LV networks. For the voltage calculations, existing simplified methods are based on single-phase voltage drop equations and an additional ‘unbalanced factor’. Given that the ‘unbalanced factor’ is determined either empirically or using data-driven techniques that require large amounts of data, such methods are not precise or practical enough for actual implementation by distribution companies. This thesis proposes a practical approach to determine customer voltages (in what-if analyses) using smart meter-driven LV line models that adequately capture the effects among the three phases.
Firstly, impedances (three-phase LV feeder lines and single-phase service lines) are estimated using linearised voltage drop equations and a regression technique. This process exploits historical time-series measurements from smart meters and from the head of the LV feeder, and assumes that the customer connectivity and customer phase connections are known. Then, using the linearised voltage drop equations and the estimated impedances, simplified calculations of customer voltages can be carried out for what-if analyses (any demand/generation condition). The proposed approach is demonstrated on realistic LV networks from Australia and the UK. Impedances are estimated considering realistic weekly historical meter measurements (i.e., active power, reactive power, and voltage magnitudes) with a 15-minute resolution (672 time steps). Voltage calculations (what-if analyses) consider weekly demand and generation profiles with a 1-minute resolution (10,080 time steps). Results show very good accuracy for most of the estimated impedances. More importantly, the calculated voltages are not only highly accurate but are also obtained much faster than with a power flow engine. Consequently, the findings suggest that the proposed approach is accurate and practical enough for use by distribution companies.
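The regression step can be illustrated with a single-phase simplification of the linearised voltage drop equations; the thesis estimates full three-phase models with phase couplings, which this sketch omits.

```python
import numpy as np

def estimate_impedance(p, q, v_head, v_cust):
    """Estimate a single-phase line's R and X from smart-meter time series.

    Uses the linearised drop V_head - V_cust ~ (R*P + X*Q) / V_cust and solves
    for (R, X) by least squares over all time steps. Single-phase
    simplification only; the thesis handles three-phase couplings.
    """
    dv = v_head - v_cust                       # measured voltage drop per step
    A = np.column_stack([p / v_cust, q / v_cust])
    (r, x), *_ = np.linalg.lstsq(A, dv, rcond=None)
    return r, x
```

With a week of 15-minute data (672 steps) the system is heavily over-determined, which is what makes the estimates robust to measurement noise.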
Magnetic mirrors and plasmonic metasurfaces for mid-infrared graphene photodetectors and biosensors
Graphene is the name given to a monolayer of carbon atoms arranged in a two-dimensional honeycomb lattice. Recently, there has been much interest in the use of graphene in photodetectors and biosensors due to its unique electronic and optical properties. Specifically, graphene is an attractive material for developing broadband and high-speed photodetectors because of its gapless band structure and ultrafast carrier dynamics. The high spatial confinement and electrical tunability of mid-infrared (MIR) graphene plasmons have also been exploited for biosensors that permit the quantification and identification of biomolecule monolayers. However, the realisation of high-performance graphene photodetectors operating in the MIR is hindered by the intrinsically low optical absorption (< 2.3%) and short carrier lifetime (sub-picosecond) of this material. In addition, the sensitivity of graphene biosensors based on plasmons is limited by the relatively small field enhancement of graphene plasmons compared to that of conventional metal plasmons. In this thesis, we present nano-optical approaches to enhance the performance of graphene-based photodetectors and biosensors operating in the MIR by employing magnetic mirrors and/or plasmonic metasurfaces. First, we propose and experimentally demonstrate a long-wave infrared device that we term a magnetic mirror, which consists of an array of amorphous silicon cuboids on a gold film. The device is demonstrated to reflect light with high reflectance and zero phase shift. A modified multipole analysis method is devised and employed to interpret the magnetic mirror's behaviour. We investigate the use of this device in a graphene photodetector application and show that the light absorption by graphene placed on top can be boosted by more than three orders of magnitude compared to the absorption that would occur were the graphene instead placed on a gold mirror.
This is achieved by producing a field distribution with enhanced intensity at the device surface. Second, we design and experimentally demonstrate a mid-wave infrared polarization-independent graphene photodetector via the integration of plasmonic nanoantennas that we term Jerusalem-cross antennas (JC-antennas). The JC-antennas serve to concentrate the incident light onto graphene for strongly enhanced optical absorption, as well as to collect the photocarriers. We demonstrate mid-wave infrared detection both at room temperature and at cryogenic temperatures. Our device also shows a fast and broadband photoresponse that extends to visible and near-infrared wavelengths, thanks to the carrier collection by the JC-antennas. Last, we propose and investigate a biosensor device that combines the strong field confinement and electrical tunability of graphene plasmons with the large field enhancement of metallic nanoantennas. The device consists of an array of plasmonic nanoantennas and graphene nanoslits on a resonant substrate. Systematic electromagnetic simulations are performed to quantify the sensing performance of the proposed device. Our simulations show that the proposed device outperforms designs in which only plasmons from metallic nanoantennas or plasmons from graphene are utilized.
Development of Multispectral Image Sensors by Exploring Nanophotonics
A multispectral camera system captures image data within specific wavelength ranges, in narrow spectral bands, across the electromagnetic spectrum. In recent years, image sensors integrated with multiple narrow-band optical filters have been widely used for multispectral imaging across many applications, such as area imaging, medical detection, object identification, and remote sensing. Two kinds of multispectral imaging systems have previously been reported. The first combines multiple cameras, each mounted with optical bandpass filters and optics whose peak wavelengths and spectral widths depend on the application. The second is a single-sensor multispectral camera that integrates multiple filters (called a filter mosaic) on one image sensor. The filter mosaic fabrication technology disclosed so far uses a multilayer coating technique and requires highly accurate alignment with micro-lithography facilities. With this manufacturing process, each filter has to be fabricated separately through multiple steps, such as baking, exposure, and development, which significantly increases fabrication complexity and cost. This limits the wide use of this promising multispectral imaging technology in many applications. This thesis investigates the development of new low-cost, single-sensor multispectral cameras using different filter mosaic technologies that explore plasmonics, multilayer coatings based on heterostructured dielectrics, and hybrid metal-dielectric structures. The thesis starts with an introduction, Chapter 1, presenting the filter technologies, simulation techniques and fabrication technologies. This is followed by a novel technique to enhance the transmission efficiency of plasmonic colour filters based on a coaxial hole array in Chapter 2. Chapter 3 demonstrates a CMY (cyan, magenta and yellow) camera using subtractive colour mixing.
A colour filter mosaic made of metal-dielectric-metal nanorods is developed and then integrated on an MT9P031 CMOS image sensor to demonstrate its performance. In Chapter 4, a single-sensor multispectral image camera is developed using a hybrid filter mosaic integrated onto a Sony monochrome image sensor. Moreover, a multispectral imaging algorithm is used to reconstruct a colour image of a 24-patch Macbeth chart. Later, this image sensor was integrated with a DJI drone for an area imaging application. Chapter 5 presents new multispectral filter technologies that are polarization- and incident-angle-independent. Lastly, Chapter 6 presents conclusions and discusses future research directions. The appendix presents an optical bandpass filter mosaic and multispectral camera based on a mass-producible filter technology with a spectral width of only 17 nm in the near-IR wavelength range; this technology is confidential and licensed as a trade secret to the University of Melbourne, so only parts of it are disclosed in the appendix due to a company formation (PIXsensor).
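The band-reconstruction idea behind such multispectral pipelines can be sketched as a least-squares inversion of the filter response matrix. This is an illustrative simplification; the thesis's actual reconstruction algorithm, calibration and noise handling are not shown.

```python
import numpy as np

def reconstruct(readings, responses):
    """Recover per-band intensities from raw filter readings.

    `responses[i, j]` is filter i's sensitivity to spectral band j; readings
    are modelled as responses @ bands, so a pseudo-inverse recovers the
    bands. Real pipelines add calibration and regularisation against noise.
    """
    return np.linalg.pinv(responses) @ readings
```

With more filters than bands the pseudo-inverse acts as a least-squares fit, which is how a mosaic of overlapping filters can still yield clean band estimates.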
Optimal Power Flow for Active Distribution Networks: Advanced Formulations, Practical Considerations and Laboratory Demonstration
The rapid growth of renewable distributed generation (DG) has introduced unconventional challenges for distribution companies (e.g., dealing with voltage rise). To enable future DG growth, a promising alternative (to the otherwise capital-intensive and time-consuming network reinforcements) is the real-time orchestration of DG and existing network assets using advanced schemes. In this context, the operational usage of Optimal Power Flow (OPF)—an optimisation-based technique traditionally found in transmission network applications, albeit using simplified formulations—as a decision-making engine has gained tremendous interest in recent literature. Nonetheless, before such schemes can be readily integrated into the control room of distribution networks, several practical challenges must be addressed. Firstly, the operational usage of OPF requires a fast and scalable formulation that can handle the size (thousands of nodes) and complexity (phase unbalances, discrete devices) of typical distribution networks. Furthermore, since differences in device-specific characteristics at the sub-minute scale (delays, ramp rates and deadbands) may lead to coordination issues when multiple devices are controlled simultaneously, additional adaptations are necessary to ensure OPF-based setpoints can be implemented in real-world applications. Finally, while active power curtailment is inevitable at times, such actions have a direct impact on the return on investment for DG owners; hence, the implications of different fairness objectives (e.g., removing disparity in renewable energy harvesting or financial benefits), as well as the trade-offs between fairness (reducing disparity) and efficiency (aggregated performance), need to be understood first.
In this PhD project, the following research is carried out to address the aforementioned challenges: - A linearised, three-phase AC OPF is developed to cater for multi-voltage-level distribution feeders and integer variables. Its performance is demonstrated using a realistic MV-LV residential feeder (from the primary substation down to the individual connection points of 4,626 single-phase consumers) with over 4,900 nodes. - The necessary adaptations to existing device controllers and the OPF formulation are proposed, allowing network participants and assets to be successfully controlled using OPF-based schemes in an operational setting with minute-scale control actions. In particular, the importance of the proposed adaptations in preventing short-term voltage spikes is demonstrated using a rural distribution feeder with multiple actively managed on-load tap changers and wind farms. - The implications and trade-offs of different fairness considerations are investigated using several OPF-based schemes, each considering a unique and contrasting fairness objective. The findings highlight the multi-faceted nature of curtailment fairness and the importance of identifying the most appropriate objective for a given application; they can also help operators and policymakers make informed decisions when a portfolio of DG is to be managed. - A hardware-in-the-loop demonstration platform is built using commercially available software and hardware at the Smart Grid Lab of The University of Melbourne. This implementation extends beyond static plots and tables by introducing a rich, interactive user interface, enabling a more realistic and engaging way of showcasing advanced schemes to industry.
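One of the fairness notions discussed above, equalising the curtailed fraction across DG units, can be sketched as follows. This is purely illustrative: the thesis embeds such objectives inside a three-phase OPF with network constraints, which this sketch omits entirely.

```python
def fair_curtailment(available_kw, limit_kw):
    """Split curtailment so every generator exports the same fraction of its
    available power (one possible fairness notion; the thesis also compares
    energy- and financially-based objectives inside a full OPF).

    Returns the per-generator export so that the total respects `limit_kw`.
    """
    total = sum(available_kw)
    if total <= limit_kw:
        return list(available_kw)      # no curtailment needed
    ratio = limit_kw / total           # common export fraction for all units
    return [p * ratio for p in available_kw]
```

Note the fairness/efficiency tension the abstract mentions: equal-fraction curtailment is fair in harvested-energy terms, but an efficiency-maximising scheme might instead curtail only the units whose curtailment relieves the binding constraint.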
A software-defined networking framework for IoT
In recent years, we have witnessed a shift from traditional internet networks interconnecting computers based on well-established standards towards a pervasive network of networks that provides internet connectivity even to the smallest physical objects. This Internet of Things (IoT) network is an enabling technology for the next industrial revolution (aka Industry 4.0), where operational technology meets information technology. New IoT applications across specific contexts, such as smart cities, smart homes and smart agriculture, are realised upon sensors and actuators. The networking of sensors and actuators has extended the scope of networked sensing technologies such as Wireless Sensor Networks (WSNs). However, the networking of wireless sensor devices, or sensor nodes, imposes several challenges due to their inherent resource limitations, such as computational capability, energy, memory, and communication bandwidth. The management of the limited resources of WSNs is challenging, and its complexity increases as the network grows. Thus, the current state of WSNs will not be able to meet IoT requirements unless appropriate solutions to the aforementioned challenges are found. The focus of this thesis is to investigate the challenges and benefits of Software-Defined Wireless Sensor Networks (SDWSNs) as a solution for flexible resource management and reconfiguration of WSNs. In short, the contributions of this thesis are as follows. (i) The feasibility and practicability of SDWSNs for network and resource management was demonstrated. This research work shows the ease of managing the network topology and the transmission power of sensor nodes using a centralized controller without any firmware modification.
(ii) The previous research work is extended to an SDN-based management system for IP sensor networks and compared with the Routing Protocol for Low-Power and Lossy Networks (RPL) to show the advantages of removing energy- and processing-intensive functions from sensor nodes. This contribution also presents, for the first time, the control overhead metric of an SDWSN, and compares it against a WSN running RPL. (iii) Next, the effects on network performance of making the WSN reprogrammable were examined by proposing a model-based characterisation of energy consumption to calculate the energy consumed and the control overhead introduced for small, large and ‘pseudo-dynamic’ SDWSNs. (iv) Last, the ability of SDWSNs to extend the network lifetime whilst keeping the control overhead low was demonstrated by proposing an energy-aware routing protocol for software-defined multihop wireless sensor networks that seeks to prolong the overall lifetime of the sensor network while maintaining a high packet delivery ratio. Extensive simulations and experiments were carried out to validate the benefits and impacts on network performance of all the aforesaid research works. This thesis also puts forth SDWSNs as a potential pathway to overcome the rigidity in management that currently exists in WSNs.
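Model-based energy characterisations like the one in contribution (iii) are often built on the first-order radio model; the sketch below uses that model with assumed coefficients, not the thesis's calibrated values.

```python
# First-order radio model commonly used to characterise WSN energy use.
# Both coefficients below are illustrative, not measured values.
E_ELEC = 50e-9       # J/bit spent in the radio electronics (tx or rx)
EPS_AMP = 100e-12    # J/bit/m^2 spent in the transmit amplifier

def tx_energy(bits, dist_m):
    """Energy to transmit `bits` over `dist_m` (free-space d^2 path loss)."""
    return E_ELEC * bits + EPS_AMP * bits * dist_m ** 2

def rx_energy(bits):
    """Energy to receive `bits`."""
    return E_ELEC * bits

def route_energy(bits, hop_dists):
    """Total radio energy for a multihop route: every hop transmits and the
    downstream node receives. Illustrative only; the thesis's model also
    accounts for the SDN control overhead."""
    return sum(tx_energy(bits, d) + rx_energy(bits) for d in hop_dists)
```

An energy-aware routing protocol can then weight candidate routes by such per-hop costs (and by residual node energy) rather than by hop count alone.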
Efficient scheduling for radar resource management
Sensor scheduling and its application in radar have stemmed from the desire to achieve continued improvement in radar capability, particularly for multi-function radar technologies. Adaptive and cognitive radar represent the latest stage in radar evolution, invoking closed-loop scheduling to replicate the perception-action cycle of cognition. Radar resources are dynamically selected to interrogate the scene before the reflected signals are analysed to inform action in future epochs. Whilst many authors have proposed systems for adaptive and cognitive sensing, the signal processing and computing aspects of modern radar make closed-loop scheduling schemes challenging to realise on the time scales on which radar operates. This thesis focuses on the implementation aspect of the sensor scheduling problem for radar. The work is presented in three parts that investigate problems related to this issue. In the first part, we consider linear frequency modulation (LFM) range-Doppler coupling in radar and the associated range bias in measurements using this waveform. A maximum-likelihood-based estimator that exploits this error is proposed to jointly estimate target range and range-rate using a train of diverse LFM pulses. Efficient methods to select diverse pulse trains based on established adaptive radar waveform cost functions are provided. Pipeline computing architectures provided by high-bandwidth solutions comprising multiple parallel processors are well suited to complex independent processing applications. Pipeline processing for radar has previously been utilised for computationally intensive applications such as space-time adaptive processing. In the second problem, we investigate the time costs associated with radar signal processing and closed-loop sensor scheduling for a knowledge-based diversity scheme.
A universal cost for the processing activities is defined, recognising the delay and the subsequent repercussions it can have on the feedback cycle of an adaptive system. We propose two alternative parallel processing architectures that alleviate the narrow time budget between measurement epochs for a sequential feedback loop. The performance degradation of the proposed architectures is investigated in an adaptive radar scenario for various time costs. Clutter represents the unwanted signals reflected from a radar scene. Efficient clutter modelling is important in the implementation of adaptive radar, so as to minimise delay in the target detection process. In the open ocean, sea clutter can be represented using a compound-Gaussian clutter model. In the third problem, we propose a parsimonious parametric model for sea clutter texture that is suitable for high-resolution radar backscatter at low grazing angles in the open ocean. By relating the clutter to its physical source, we exploit spatio-temporal relationships to propose an efficient algorithm for estimating the spectral components of the parametric texture model. Validation is performed by comparing the predictive fit of our estimator with a series of temporal estimators and a non-parametric estimator using measured sea clutter data from the Atlantic Ocean.
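The compound-Gaussian clutter model mentioned above can be sketched as a gamma-distributed texture modulating complex-Gaussian speckle, which yields K-distributed amplitudes. The shape and power parameters below are illustrative, not fitted to any measured data.

```python
import numpy as np

def k_clutter(n, shape_nu, power=1.0, rng=None):
    """Sample compound-Gaussian (K-distributed) sea clutter returns.

    A gamma texture (local mean power, mean `power`, shape `shape_nu`)
    modulates unit-power complex-Gaussian speckle. Illustrative sketch; the
    thesis fits a spatio-temporal parametric model to the texture instead of
    drawing it independently per sample.
    """
    rng = np.random.default_rng(rng)
    texture = rng.gamma(shape_nu, power / shape_nu, n)   # slowly varying power
    speckle = (rng.standard_normal(n) + 1j * rng.standard_normal(n)) / np.sqrt(2)
    return np.sqrt(texture) * speckle
```

Smaller shape values give spikier clutter (heavier amplitude tails), which is characteristic of high-resolution, low-grazing-angle sea backscatter; the mean return power stays equal to `power` regardless of shape.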
Non-invasive convulsive seizure assessment using wearable accelerometer device
Epilepsy is characterized by recurrent and unprovoked episodes of dysfunctional neuronal activity coupled with a change in behavior and an altered state of consciousness. It is one of the most prevalent neurological disorders, affecting approximately 50 million people worldwide. One of the major disabilities attributed to epilepsy is the unpredictability of epileptic seizures (ES). A person cannot call for help during a seizure and often suffers injuries due to falls, burns, tongue biting, etc.; thus, independent living is impaired. A more serious consequence is epilepsy-associated mortality. The increased mortality in epilepsy is attributed mainly to direct causes, i.e., accidental death (drowning, motor vehicle accidents, serious head injuries) and sudden unexpected death in epilepsy (SUDEP). Evidence suggests that appropriate and timely intervention following a seizure can reduce the risk of epilepsy-associated injuries and mortality. Another class of seizures, known as psychogenic non-epileptic seizures (PNES), comprises involuntary events that share diagnostic similarities with generalized epileptic tonic-clonic seizures (GTCS). PNES events are causally associated with sporadic attacks resulting from autonomic malfunction, often linked to major psychosocial distress. PNES has a prevalence of 1-33 cases per 100,000, accounting for 5-20% of patients thought to have epilepsy. Patients with PNES require treatment tailored to address the associated psychological distress. There is the potential for severe harm from the adverse effects of the anti-epileptic drugs (AEDs) prescribed to patients with PNES, as well as an increased risk of morbidity and mortality due to intubation following prolonged seizures. In this thesis, we describe the development of a wrist-worn accelerometer (ACM)-based system for the automated detection and classification of seizures.
The first section of this thesis describes the development of a wireless remote monitoring system based on a single wrist-worn ACM sensor. A novel seizure detection algorithm was proposed and validated on 5576 h of ACM data recorded from 79 patients admitted to the Epilepsy Monitoring Unit at Royal Melbourne Hospital, Melbourne, Australia. The wearable ACM sensor achieved high seizure detection sensitivity and specificity that correlated with the gold-standard diagnosis. The study showed that a single wrist-worn ACM sensor can efficiently detect different types of convulsive seizures and can differentiate seizures from activities of daily living. In addition, it demonstrated the feasibility of an unobtrusive system for continuous remote monitoring and assessment of patients with epilepsy. The second section describes novel features that capture the temporal variations in rhythmic limb movement during a seizure, to differentiate GTCS from convulsive PNES. We observed that the manifestation of GTCS can be characterized by an onset that involves increased muscle tone, usually accompanied by irregular and asymmetric jerking, followed by tremulousness that translates into clonic activity before subsiding gradually. By contrast, no clear distinction could be seen between different phases of convulsive PNES events. Based on these observations, we proposed two new indices that capture the onset and subsiding behavior of an event: (1) the tonic index (TI), and (2) the dispersion decay index (DDI). The study showed that the TI and DDI can differentiate GTCS from convulsive PNES. Importantly, the study showed that different phases of a seizure contain clues for the differential diagnosis of PNES, which otherwise requires expensive clinical procedures. In addition, these results highlight the feasibility of a wearable ACM-based device for outpatient diagnosis of convulsive PNES.
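The exact definitions of the TI and DDI are given in the thesis; as a rough illustration of the underlying idea of summarising an event's onset and subsiding behaviour from wrist-worn accelerometry, one might compute a short-window movement envelope and simple rise/decay descriptors from it. The functions below are hypothetical stand-ins, not the thesis's indices:

```python
def movement_envelope(acc_mag, win=25):
    """Mean absolute deviation of the accelerometer-magnitude signal over
    consecutive non-overlapping windows: a crude movement-intensity envelope."""
    env = []
    for i in range(0, len(acc_mag) - win + 1, win):
        w = acc_mag[i:i + win]
        m = sum(w) / win
        env.append(sum(abs(x - m) for x in w) / win)
    return env

def onset_and_decay(env):
    """Hypothetical onset/decay descriptors (NOT the thesis's TI/DDI):
    the rise rate over the first third of the event, and the mean
    intensity of the final third relative to the peak."""
    third = max(1, len(env) // 3)
    onset_rate = (max(env[:third]) - env[0]) / third
    decay_ratio = (sum(env[-third:]) / third) / max(env)
    return onset_rate, decay_ratio

# A toy envelope that rises sharply and then subsides, GTCS-like in shape.
env = [0.1, 0.5, 1.0, 0.9, 0.6, 0.3]
onset_rate, decay_ratio = onset_and_decay(env)
```

A GTCS-like event would show a steep onset followed by gradual decay, whereas a convulsive PNES event, lacking distinct phases, would yield flatter descriptors.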
Despite rapid technological advancement in surgical techniques and the discovery of anti-epileptic medication, one-third of epileptic patients are forced to live with seizures. The unpredictability and risk of injury (falls, head injuries, etc.) associated with seizures are major contributors to poor quality of life (QOL), requiring round-the-clock monitoring by caregivers. Therefore, in the third section of the thesis we present a novel algorithm for real-time onset detection of GTCS events using a single wrist-worn ACM-based device. The algorithm was tested on 5576 h of ACM data from 79 patients and detected 21 of 21 (sensitivity: 100%, FAR: 0.76/24 h) GTCS events from 12 patients at 7 s from onset. Taking into consideration the challenges of real-time onset detection of seizures, it is anticipated that the proposed wrist-worn ACM-based system would aid efficient real-time remote monitoring of epileptic patients, improving their QOL and acting as a seizure-triggered alarm and therapeutic system.
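The reported figures combine the two standard metrics for long-term seizure-detection studies: sensitivity over the true events and false alarms normalised to a 24 h rate. A small sketch of how they are computed; the false-alarm count of 177 is an illustrative value chosen to be consistent with the reported 0.76/24 h over 5576 h, and is not stated in the abstract:

```python
def detection_metrics(true_events, detected, false_alarms, hours):
    """Sensitivity and false-alarm rate (per 24 h), the two figures
    usually reported for long-term seizure-detection studies."""
    sensitivity = detected / true_events
    far_per_24h = false_alarms / (hours / 24.0)
    return sensitivity, far_per_24h

# 21/21 events detected over 5576 h of recording; 177 false alarms is
# an assumed count consistent with the reported rate.
sens, far = detection_metrics(true_events=21, detected=21,
                              false_alarms=177, hours=5576)
```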
Strategic Deployment of Artificial Intelligence-Enhanced Cloudlets for Low-latency Human-to-Machine Applications
The genesis of mobile cloud computing technology is one of the most significant technical advances of the last decade and can be seen as a marriage between cloud computing and mobile computing technologies. This paradigm brings mobile users, telecommunication network operators, and cloud service providers to a common playground, providing business opportunities for network operators and cloud service providers alike. The extension of this facility towards access networks by aggregation of edge-intelligence nodes like cloudlets is one more step forward. A cloudlet is a "data centre in a box" with enhanced mobility support that brings the cloud closer to mobile users; it uses virtual machine abstraction to dynamically allocate resources to trusted mobile users, isolate untrusted mobile users, and support a wide variety of applications without being limited by their process structures, programming languages, or operating systems. To fulfil the voracious demand for computational resources, entangled with the stringent latency requirements of computationally intensive and mission-critical applications in augmented reality, autonomous transport, cognitive assistance, and the Tactile Internet, installing cloudlets near the access network is a very promising solution because of its support for wide geographical network distribution, low latency, mobility, and heterogeneity. Finding the optimal cost of cloudlet deployment over urban, suburban, and rural deployment areas with an existing access network essentially implies finding the optimal placement locations of the cloudlets over the entire deployment area and the optimal amount of computational and storage resources per cloudlet. Technically, this research question leads to an assignment problem, where we need to find the optimal interconnections between mobile devices and cloudlets.
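The assignment problem just described can be made concrete on a toy instance: given a cost for serving each mobile device from each candidate cloudlet, exhaustive search finds the cheapest set of interconnections. This brute-force sketch is exponential in the number of devices and thus only a stand-in for the mixed-integer formulations used at realistic scale; the cost matrix is made up for illustration:

```python
from itertools import product

def cheapest_assignment(cost):
    """Exhaustively search every device-to-cloudlet assignment and
    return the minimum total cost plus the assignment achieving it.
    cost[d][c] is the cost of serving device d from cloudlet c."""
    n_devices, n_cloudlets = len(cost), len(cost[0])
    best_cost, best_assign = float("inf"), None
    for assign in product(range(n_cloudlets), repeat=n_devices):
        total = sum(cost[d][c] for d, c in enumerate(assign))
        if total < best_cost:
            best_cost, best_assign = total, assign
    return best_cost, best_assign

# Three devices, two candidate cloudlets, illustrative per-link costs.
best_cost, best_assign = cheapest_assignment([[1, 5], [4, 2], [3, 3]])
```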
In this research, we propose a hybrid cost-optimal cloudlet placement framework over existing fibre-wireless access networks based on mixed-integer non-linear programming. We primarily focus on static cloudlet network planning and placement, i.e., identification of the exact optimal cloudlet placement locations over urban, suburban, and rural deployment scenarios, to provide guidance on installation cost and to assess the workload distribution among different cloudlets and the percentage of incremental energy arising from the presence of cloudlets in the fibre-wireless access networks. However, we observed that mixed-integer programming based frameworks suffer from scalability issues with large networks and cannot be applied when the network data is unavailable. To overcome this, we design analytical frameworks that provide a quick first-hand estimation of cloudlet deployment cost depending on mobile user density, network architecture, and QoS requirements. We verify that the results produced by this method can be considered tight lower bounds on those produced by integer programming based frameworks for most practical scenarios. We further perform a parametric analysis to understand the dependence of cloudlet deployment cost on various network parameters. However, depending on the mobility pattern and dynamically varying computational requirements of associated mobile devices, cloudlets at different parts of the network become either overloaded or under-loaded. Thus, we propose an economic and non-cooperative load balancing game for low-latency applications among neighbouring cloudlets, from the same as well as different service providers. While addressing load balancing problems, most authors stress minimising the end-to-end latency and do not consider the heterogeneity of neighbouring cloudlets. In practice, however, mobile users should be satisfied as long as their job requests are processed within the requested QoS latency target.
Therefore, instead of formulating a conventional latency minimisation game, we propose a novel utility maximisation game to capture the multi-party economic interaction among heterogeneous neighbouring cloudlets. In this load balancing game, the participating cloudlets achieve their maximum utility when the end-to-end latency is equal to the QoS latency target. With this formulation, each cloudlet is always interested in receiving some extra job requests, and the associated incentives, from its neighbouring cloudlets to push its utility towards the maximum point. To implement this game-theoretic load balancing framework, firstly, we propose a centralised mechanism in which all competing cloudlets send their predicted job request arrival rates to a neutral mediator. The mediator computes the Nash equilibrium load balancing strategies for the cloudlets and broadcasts them before the actual job requests arrive. This centralised mechanism also ensures that competing cloudlets are truthful when revealing private information, e.g., total incoming job requests. Secondly, we propose a continuous-action reinforcement learning automata-based scheme that allows each cloudlet to independently compute the Nash equilibrium in a completely distributed network setting. We critically study the convergence properties of the designed learning algorithm, exploiting our understanding of the underlying load balancing game for faster convergence, and study the impacts of exploration and exploitation on learning accuracy. After investigating the cloudlet placement and load balancing problems, we investigate the role of edge-intelligence servers like cloudlets in deploying low-latency human-to-machine applications such as teleoperation, immersive virtual/augmented reality, and industrial automotive control over long-distance access networks.
Such applications are being realised through the Tactile Internet, which allows users to control remote things and involves the bi-directional transmission of video, audio, and haptic data. However, end-to-end propagation latency presents a stubborn bottleneck, which can be alleviated by various artificial intelligence-based application layer and network layer prediction algorithms, e.g., forecasting and preempting haptic feedback transmission. To gain proper insights, we study experimental data on the traffic characteristics of control signals and haptic feedback samples obtained through virtual reality-based human-to-machine teleoperation. Moreover, we propose the installation of edge-intelligence servers between master and slave devices to preempt haptic feedback from control signals. Harnessing virtual reality-based teleoperation experiments, we further propose a two-stage artificial intelligence-based module for forecasting haptic feedback samples. The first-stage unit is a supervised binary classifier that detects whether haptic sample forecasting is necessary, and the second-stage unit is a guided reinforcement learning unit that ensures haptic feedback samples are forecasted accurately when different types of material are present. Furthermore, by evaluating analytical expressions, we show the feasibility of deploying remote human-to-machine teleoperation over fibre backhaul using our proposed artificial intelligence-based module, even under heavy traffic intensity.
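The load balancing game described earlier peaks each cloudlet's utility exactly where its end-to-end latency meets the QoS target. A toy sketch under an M/M/1 queueing assumption (our assumption for illustration, not necessarily the thesis's model): the utility-maximising load is simply the one at which queueing latency equals the target, so a cloudlet below that load has an incentive to accept extra jobs from its neighbours.

```python
def mm1_latency(load, service_rate):
    """Mean sojourn time of an M/M/1 queue (an illustrative queueing
    assumption, not necessarily the model used in the thesis)."""
    assert 0 <= load < service_rate
    return 1.0 / (service_rate - load)

def utility(load, service_rate, target, penalty=1.0):
    """Toy utility, maximised exactly when latency hits the QoS target."""
    return -penalty * (mm1_latency(load, service_rate) - target) ** 2

def best_response_load(service_rate, target):
    """Load at which the M/M/1 latency equals the target: the
    utility-maximising operating point of this toy model."""
    return max(0.0, service_rate - 1.0 / target)

# A cloudlet serving 100 jobs/s with a 50 ms QoS target is happiest at
# 80 jobs/s; below that it welcomes extra (incentivised) job requests.
opt_load = best_response_load(service_rate=100.0, target=0.05)
```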
Automating Computed Tomography Analysis for Early Diagnosis of Neurological Diseases
Neurological diseases are diseases of the nervous system that occur due to structural or biochemical abnormalities in the brain and nervous system. The diverse set of neurological diseases, with their varied symptoms, makes it complicated to diagnose them with a standard protocol. Nevertheless, medical imaging can play a significant role in their early diagnosis by providing an accurate visualisation of internal body structures. However, analysis of medical images mostly involves significant human intervention in complex disease cases. This process is not only time-intensive but also laborious, and exhibits inter- and intra-observer variance. To this end, this study contributes to automating the early diagnosis of neurological diseases from computed tomography images. The first contribution of the thesis involves early diagnosis of cerebral aneurysms from computed tomography angiograms. A large-scale computed tomography angiogram dataset is constructed to investigate the automated diagnosis of unruptured cerebral aneurysms. A novel convolutional neural network architecture is proposed and trained on the dataset to identify aneurysm voxels in the images and, subsequently, to diagnose the presence of an aneurysm in a given scan. The proposed approach achieves a sensitivity of 92% in diagnosing aneurysms and a dice score of 65.2% in their localisation, demonstrating the efficacy of the proposed work. The second focus is on Parkinson's disease, a neurological disease affecting the control of body movements. It can cause significant speech impairment early in its course. Therefore, analysing the abnormalities in vocal fold movements during phonation can be a useful early indicator. Computed tomography is an efficient imaging modality that captures dynamic vocal fold movements with good spatial and temporal resolution, allowing a direct assessment of the movements of the vocal folds and associated structures.
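The dice score cited for aneurysm localisation is the standard overlap measure between the predicted and ground-truth voxel sets, 2|A ∩ B| / (|A| + |B|). A minimal sketch with made-up voxel coordinates:

```python
def dice_score(pred_voxels, true_voxels):
    """Dice similarity coefficient between predicted and ground-truth
    voxel sets: 2|A intersect B| / (|A| + |B|)."""
    a, b = set(pred_voxels), set(true_voxels)
    if not a and not b:
        return 1.0  # both empty: vacuous perfect agreement
    return 2.0 * len(a & b) / (len(a) + len(b))

# Two of three predicted voxels overlap the ground truth -> dice = 2/3.
d = dice_score([(1, 1, 1), (1, 1, 2), (2, 0, 0)],
               [(1, 1, 1), (1, 1, 2), (0, 5, 5)])
```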
A large-scale image dataset is constructed by capturing computed tomography scans of the neck during the vocalisation period. First, a basic image processing-based approach is proposed that helps to explore and identify clinically useful feature points on the arytenoid cartilages supporting the vocal fold movements. Further, a convolutional neural network-based object detector is trained to fully localise the arytenoid cartilages. An inter-arytenoid distance feature is then extracted to demonstrate its utility in differentiating Parkinson's patients from healthy controls. In the final part of the contribution, novel machine learning interpretability techniques based on canonical correlation analysis are proposed that assist in interpreting the representations learned by convolutional neural networks designed for specific medical image analysis tasks. A set of novel two-dimensional multiset canonical correlation analysis algorithms is proposed that effectively captures the linear relationships between learned feature representations within and between neural networks. Results are presented by employing the proposed interpretability techniques to analyse the learned representations of neural networks trained to segment cerebral aneurysms from computed tomography angiograms. In summary, the thesis contributes to automating the analysis of computed tomography images for early detection of neurological diseases.
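Canonical correlation analysis finds maximally correlated linear projections of two feature representations. In the one-dimensional special case the single canonical correlation reduces to the Pearson correlation, which conveys the core intuition; the thesis's two-dimensional multiset algorithms generalise this to several matrix-valued representations at once. A minimal sketch of that special case:

```python
import math

def pearson(x, y):
    """Pearson correlation of two feature sequences. For one-dimensional
    variables this equals the single canonical correlation, the quantity
    that CCA and its multiset extensions generalise to multi-dimensional
    learned representations."""
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    cov = sum((a - mx) * (b - my) for a, b in zip(x, y))
    sx = math.sqrt(sum((a - mx) ** 2 for a in x))
    sy = math.sqrt(sum((b - my) ** 2 for b in y))
    return cov / (sx * sy)

r_aligned = pearson([1.0, 2.0, 3.0, 4.0], [2.0, 4.0, 6.0, 8.0])  # linearly related
r_opposed = pearson([1.0, 2.0, 3.0], [3.0, 2.0, 1.0])            # anti-correlated
```

High canonical correlations between two networks' learned features suggest they encode similar information, which is the basis of the interpretability analysis described above.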