Electrical and Electronic Engineering - Theses
Optimal Power Flow for Active Distribution Networks: Advanced Formulations, Practical Considerations and Laboratory Demonstration
The rapid growth of renewable distributed generation (DG) has introduced unconventional challenges for distribution companies (e.g., dealing with voltage rise). To enable future DG growth, a promising alternative (to the otherwise capital-intensive and time-consuming network reinforcements) is the real-time orchestration of DG and existing network assets using advanced schemes. In this context, the operational usage of Optimal Power Flow (OPF), an optimisation-based technique traditionally found in transmission network applications (albeit using simplified formulations), as a decision-making engine has gained tremendous interest in recent literature. Nonetheless, before such schemes can be readily integrated into the control room of distribution networks, several practical challenges must be addressed. Firstly, the operational usage of OPF requires a fast and scalable formulation that can handle the size (thousands of nodes) and complexity (phase unbalances, discrete devices) of typical distribution networks. Furthermore, since differences in device-specific characteristics at the sub-minute scale (delays, ramp rates and deadbands) may lead to coordination issues when multiple devices are controlled simultaneously, additional adaptations are necessary to ensure OPF-based setpoints can be implemented in real-world applications. Finally, while active power curtailment is inevitable at times, such actions have a direct impact on the return on investment for DG owners; therefore, the implications of different fairness objectives (e.g., removing disparity in renewable energy harvesting or financial benefits), as well as the trade-offs between fairness (reducing disparity) and efficiency (aggregated performance), need to be understood first.
In this PhD project, the following research is carried out to address the aforementioned challenges:
- A linearised, three-phase AC OPF is developed to cater for multi-voltage-level distribution feeders and integer variables. Its performance is demonstrated using a realistic MV-LV residential feeder (from the primary substation down to the individual connection points of 4,626 single-phase consumers) with over 4,900 nodes.
- The necessary adaptations to existing device controllers and the OPF formulation are proposed, allowing network participants and assets to be successfully controlled using OPF-based schemes in an operational setting with minute-scale control actions. In particular, the importance of the proposed adaptations in preventing short-term voltage spikes is demonstrated using a rural distribution feeder with multiple actively managed on-load tap changers and wind farms.
- The implications and trade-offs of different fairness considerations are investigated using several OPF-based schemes, each considering a unique and contrasting fairness objective. The findings highlight the multi-faceted nature of curtailment fairness and the importance of identifying the most appropriate objective for a given application; they can also help operators and policymakers make informed decisions when a portfolio of DG is to be managed.
- A hardware-in-the-loop demonstration platform is built using commercially available software and hardware at the Smart Grid Lab of The University of Melbourne. This implementation extends beyond static plots and tables by introducing a rich, interactive user interface, enabling a more realistic and engaging way of showcasing advanced schemes to industry.
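The fairness-versus-efficiency tension described above can be made concrete with a toy curtailment example. The sketch below (pure Python; the unit sizes, the equal-fraction rule and the merit-order rule are illustrative assumptions, not the thesis's OPF formulations) compares the disparity of two allocation rules using Jain's fairness index.

```python
def jain_index(xs):
    """Jain's fairness index over curtailment fractions:
    1.0 means no disparity, 1/n means fully concentrated."""
    n = len(xs)
    s, sq = sum(xs), sum(x * x for x in xs)
    return 1.0 if sq == 0 else (s * s) / (n * sq)

def equal_fraction_curtailment(potential, limit):
    """Fairness-first rule: every unit sheds the same fraction of output."""
    total = sum(potential)
    f = max(0.0, (total - limit) / total)
    return [f] * len(potential)

def merit_order_curtailment(potential, limit, order):
    """Efficiency-style rule: curtail units fully, one by one, in priority order."""
    excess = sum(potential) - limit
    fractions = [0.0] * len(potential)
    for i in order:
        if excess <= 0:
            break
        cut = min(potential[i], excess)
        fractions[i] = cut / potential[i]
        excess -= cut
    return fractions

# Three DG units sharing a 45 MW feeder export limit (illustrative numbers).
potential, limit = [10.0, 20.0, 30.0], 45.0
fair = equal_fraction_curtailment(potential, limit)
merit = merit_order_curtailment(potential, limit, order=[0, 1, 2])
print(jain_index(fair))   # 1.0  (equal 25% cut for everyone)
print(jain_index(merit))  # ~0.49 (cuts fall entirely on the first units)
```

Both rules export the same 45 MW total in this toy case, so the example isolates disparity; in a real unbalanced network, nodal sensitivities would make aggregate efficiency differ between objectives as well.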
A software-defined networking framework for IoT
In recent years, we have witnessed a shift from traditional internet networks interconnecting computers based on well-established standards towards a pervasive network of networks that provides internet connectivity to even the smallest physical objects. This Internet of Things (IoT) network is an enabling technology for the next industrial revolution (aka Industry 4.0), where operational technology meets information technology. New IoT applications across specific contexts such as smart cities, smart homes, and smart agriculture are realised upon sensors and actuators. The networking of sensors and actuators has extended the scope of networked sensing technologies such as Wireless Sensor Networks (WSNs). However, the networking of wireless sensor devices, or sensor nodes, imposes several challenges due to their inherent resource limitations in computational capability, energy, memory, and communication bandwidth. Managing the limited resources of WSNs is challenging, and the complexity increases as the network size grows. Thus, the current state of WSNs will not be able to meet IoT requirements unless appropriate solutions to the aforementioned challenges are found. The focus of this thesis is to investigate the challenges and benefits of Software-Defined Wireless Sensor Networks (SDWSNs) as a solution for flexible resource management and reconfiguration of WSNs. In short, the contributions of this thesis are as follows. (i) The feasibility and practicability of SDWSNs for network and resource management was demonstrated. This work shows the ease of managing the network topology and the transmission power of sensor nodes using a centralized controller, without any firmware modification.
(ii) The previous work is extended to an SDN-based management system for IP sensor networks, which is compared with the Routing Protocol for Low-Power and Lossy Networks (RPL) to show the advantages of removing energy- and processing-intensive functions from sensor nodes. This contribution also presents, for the first time, the control overhead metric of an SDWSN, compared against a WSN running RPL. (iii) Next, the effects on network performance of making the WSN reprogrammable were examined by proposing a model-based characterisation of energy consumption, calculating the energy consumed and the control overhead introduced for small, large and ‘pseudo-dynamic’ SDWSNs. (iv) Last, the ability of SDWSNs to extend network lifetime while keeping control overhead low was demonstrated by proposing an energy-aware routing protocol for software-defined multihop wireless sensor networks that seeks to prolong the overall lifetime of the sensor network while maintaining a high packet delivery ratio. Extensive simulations and experiments were carried out to validate the benefits and the impacts on network performance of all the aforesaid work. This thesis also puts forth SDWSNs as a potential pathway to overcome the rigidity in management that currently exists in WSNs.
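To illustrate the flavour of an energy-aware routing objective, here is a minimal sketch. The cost model (each hop's transmission cost inflated by the relay's low residual battery) is an assumption for illustration, not the thesis's protocol; it shows how such an objective steers routes away from energy-depleted nodes.

```python
import heapq

def energy_aware_route(links, battery, src, dst):
    """Dijkstra over a hop-cost that penalises low-energy relays.
    Illustrative cost model: tx_cost / battery[next_hop].
    Assumes dst is reachable from src."""
    best = {src: 0.0}
    prev = {}
    pq = [(0.0, src)]
    while pq:
        cost, u = heapq.heappop(pq)
        if u == dst:
            break
        if cost > best.get(u, float("inf")):
            continue  # stale heap entry
        for v, tx_cost in links.get(u, []):
            c = cost + tx_cost / battery[v]  # low battery -> high cost
            if c < best.get(v, float("inf")):
                best[v], prev[v] = c, u
                heapq.heappush(pq, (c, v))
    path, node = [dst], dst
    while node != src:
        node = prev[node]
        path.append(node)
    return path[::-1]

# Two-hop route via a depleted node 'a' vs a three-hop route via healthy nodes.
links = {"s": [("a", 1.0), ("b", 1.0)], "a": [("d", 1.0)],
         "b": [("c", 1.0)], "c": [("d", 1.0)]}
battery = {"s": 1.0, "a": 0.1, "b": 0.9, "c": 0.9, "d": 1.0}
route = energy_aware_route(links, battery, "s", "d")
print(route)  # ['s', 'b', 'c', 'd']: the longer path avoids depleted relay 'a'
```

In an SDWSN, a centralised controller would compute such routes from a global view of residual energy and push the forwarding state to the nodes, which is precisely the management flexibility the thesis explores.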
Efficient scheduling for radar resource management
Sensor scheduling and its application in radar has stemmed from the desire to achieve continued improvement in radar capability, particularly for multi-function radar technologies. Adaptive and cognitive radar represent the latest stage in radar evolution, invoking closed-loop scheduling to replicate the perception-action cycle of cognition. Radar resources are dynamically selected to interrogate the scene before the reflected signals are analysed to inform action in future epochs. Whilst many authors have proposed systems for adaptive and cognitive sensing, the signal processing and computing aspects of modern radar make closed-loop scheduling schemes challenging to realise on the time scales at which radar operates. This thesis is focused on the implementation aspect of the sensor scheduling problem for radar. The work is presented in three parts that investigate problems related to this issue. In the first part, we consider linear frequency modulation (LFM) range-Doppler coupling in radar and the associated range bias in measurements using this waveform. A maximum likelihood based estimator that exploits this error is proposed to jointly estimate target range and range-rate using a train of diverse LFM pulses. Efficient methods to select diverse pulse trains based on established adaptive radar waveform cost functions are provided. Pipeline computing architectures provided by high-bandwidth solutions comprising multiple parallel processors are well suited to complex independent processing applications. Pipeline processing for radar has previously been utilised for computationally intensive applications such as space-time adaptive processing. In the second problem, we investigate the time costs associated with radar signal processing and closed-loop sensor scheduling for a knowledge-based diversity scheme.
A universal cost for the processing activities is defined, recognising the delay and the subsequent repercussions it can have on the feedback cycle of an adaptive system. We propose two alternative parallel processing architectures that alleviate the narrow time burden between measurement epochs for a sequential feedback loop. The performance degradation of the proposed architectures is investigated in an adaptive radar scenario for various time costs. Clutter represents the unwanted signals reflected from a radar scene. Efficient clutter modelling is important in the implementation of adaptive radar so as to minimise delay in the target detection process. In the open ocean, sea clutter can be represented using a compound-Gaussian clutter model. In the third problem, we propose a parsimonious parametric model for sea clutter texture that is suitable for high-resolution radar backscatter at low grazing angles in the open ocean. By relating the clutter to its physical source, we exploit spatio-temporal relationships to propose an efficient algorithm for estimating the spectral components of the parametric texture model. Validation is performed by comparing the predictive fit of our estimator with a series of temporal estimators and a non-parametric estimator using measured sea clutter data from the Atlantic Ocean.
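The compound-Gaussian model mentioned above can be sketched in a few lines: a gamma-distributed texture modulates complex-Gaussian speckle, giving K-distributed amplitudes. The parameters below are illustrative, and the sketch deliberately omits the spatio-temporal texture correlation that the proposed parametric model captures.

```python
import math
import random

def k_clutter_samples(n, shape, scale=1.0, seed=1):
    """Draw n compound-Gaussian sea clutter returns: gamma 'texture'
    (mean = scale) modulating unit-power complex-Gaussian 'speckle'.
    Small 'shape' -> spikier, heavier-tailed clutter (low grazing angle)."""
    rng = random.Random(seed)
    out = []
    for _ in range(n):
        texture = rng.gammavariate(shape, scale / shape)
        speckle = complex(rng.gauss(0.0, math.sqrt(0.5)),
                          rng.gauss(0.0, math.sqrt(0.5)))
        out.append(math.sqrt(texture) * speckle)
    return out

samples = k_clutter_samples(20000, shape=0.5)
power = sum(abs(z) ** 2 for z in samples) / len(samples)
print(power)  # ≈ 1.0 (the chosen scale), but far heavier-tailed than Gaussian
```

A texture held fixed across a coherent processing interval and evolved slowly across intervals would be the first step towards the spatio-temporal structure the thesis estimates.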
Non-invasive convulsive seizure assessment using wearable accelerometer device
Epilepsy is characterized by recurrent and unprovoked episodes of dysfunctional neuronal activity, coupled in time with a change in behavior and an altered state of consciousness. It is one of the most prevalent neurological disorders, affecting approximately 50 million people worldwide. One of the major disabilities attributed to epilepsy is the unpredictability of epileptic seizures (ES). A person cannot call for help during a seizure, often suffering injuries due to falls, burns, tongue biting, etc.; thus, independent living is impaired. A more serious consequence is epilepsy-associated mortality. The increased mortality in epilepsy is attributed mainly to direct causes, i.e., accidental death (drowning, motor vehicle accidents, serious head injuries) and sudden unexpected death in epilepsy (SUDEP). Evidence suggests that appropriate and timely intervention following a seizure can reduce the risk of epilepsy-associated injuries and mortality. Another class of seizures, known as psychogenic non-epileptic seizures (PNES), comprises involuntary events that share diagnostic similarities with generalized epileptic tonic-clonic seizures (GTCS). PNES events are causally associated with sporadic attacks resulting from autonomic malfunction, often linked to major psychosocial distress. PNES has a prevalence of 1-33 cases per 100,000, accounting for 5-20% of patients thought to have epilepsy. Patients with PNES require treatment tailored to address the associated psychological condition. There is potential for severe harm from the adverse effects of the anti-epileptic drugs (AEDs) prescribed to patients with PNES, as well as increased risk of morbidity and mortality due to intubation from prolonged seizures. In this thesis, we describe the development of a wrist-worn accelerometer (ACM)-based system for the automated detection and classification of seizures.
The first section of this thesis describes the development of a wireless remote monitoring system based on a single wrist-worn ACM sensor. A novel seizure detection algorithm was proposed and validated on 5576 h of ACM data recorded from 79 patients admitted to the Epilepsy Monitoring Unit at the Royal Melbourne Hospital, Melbourne, Australia. The wearable ACM sensor achieved high seizure detection sensitivity and specificity that correlated with the gold-standard diagnosis. The study showed that a single wrist-worn ACM sensor can efficiently detect different types of convulsive seizures and can differentiate seizures from activities of daily living. In addition, it demonstrated the feasibility of an unobtrusive system for continuous remote monitoring and assessment of patients with epilepsy. The second section describes novel features that capture the temporal variations in rhythmic limb movement during a seizure, to differentiate GTCS from convulsive PNES. We observed that the manifestation of GTCS can be characterized by an onset that involves increased muscle tone, usually accompanied by irregular and asymmetric jerking, followed by tremulousness that translates into clonic activity before subsiding gradually. By contrast, no clear distinction could be seen between different phases of convulsive PNES events. Based on these observations, we proposed two new indices that capture the onset and subsiding behavior of an event: (1) the tonic index (TI), and (2) the dispersion decay index (DDI). The study showed that the TI and DDI can differentiate GTCS from convulsive PNES. Importantly, the study showed that different phases of a seizure contain clues for the differential diagnosis of PNES, which is otherwise an expensive clinical procedure. In addition, these results highlight the feasibility of a wearable ACM-based device for outpatient diagnosis of convulsive PNES.
Despite rapid technological advancement in surgical techniques and the discovery of anti-epileptic medication, one-third of epileptic patients are forced to live with seizures. The unpredictability and risk of injury (falls, head injuries, etc.) associated with seizures are the major contributors to poor quality of life (QOL), requiring round-the-clock monitoring by caregivers. Therefore, in the third section of the thesis we present a novel algorithm for real-time onset detection of GTCS events using a single wrist-worn ACM-based device. The algorithm was tested on 5576 h of ACM data from 79 patients and detected 21 of 21 GTCS events from 12 patients (sensitivity: 100%, FAR: 0.76/24 h) within 7 s of onset. Considering the challenges of real-time seizure onset detection, it is anticipated that the proposed wrist-worn ACM-based system would aid efficient real-time remote monitoring of epileptic patients, improving their QOL and acting as a seizure-triggered alarm and therapeutic system.
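As a rough illustration of ACM-based convulsive movement detection, the sketch below flags windows of high acceleration-magnitude variance. It is a deliberately simple stand-in: the window length, threshold and feature are assumptions for illustration, and the thesis's validated detector and TI/DDI features are far richer than this rule.

```python
import math

def detect_onsets(samples, fs, win_s=2.0, thresh=2.5):
    """Return onset times (s) of windows whose acceleration-magnitude
    variance exceeds a threshold. samples: list of (x, y, z) tuples,
    fs: sampling rate in Hz. Units and threshold are illustrative."""
    win = int(win_s * fs)
    onsets = []
    for start in range(0, len(samples) - win + 1, win):
        mags = [math.sqrt(x * x + y * y + z * z)
                for x, y, z in samples[start:start + win]]
        mean = sum(mags) / len(mags)
        var = sum((m - mean) ** 2 for m in mags) / len(mags)
        if var > thresh:
            onsets.append(start / fs)
    return onsets
```

On synthetic data (10 s at rest followed by 10 s of rhythmic 3 Hz shaking) the rule flags only the shaking segment; distinguishing seizures from vigorous daily activities is exactly where the thesis's richer rhythmicity features are needed.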
Strategic Deployment of Artificial Intelligence-Enhanced Cloudlets for Low-latency Human-to-Machine Applications
The genesis of mobile cloud computing technology is one of the most significant technical advents of the last decade, and can be seen as a marriage between cloud computing and mobile computing technologies. This paradigm brings mobile users, telecommunication network operators, and cloud service providers to a common playground, providing business opportunities for network operators and cloud service providers. The extension of this facility towards access networks, by aggregation of edge-intelligence nodes like cloudlets, is one more step forward. A cloudlet is a "data centre in a box" with enhanced mobility support that brings the cloud closer to mobile users; it uses virtual machine abstraction for dynamic resource allocation to trusted mobile users, isolates untrusted mobile users, and supports a wide variety of applications without being limited by their process structures, programming languages, or operating systems. To fulfil the ravenous demand for computational resources, entangled with the crisp latency requirements of various computationally intensive and mission-critical applications related to augmented reality, autonomous transport, cognitive assistance, and the Tactile Internet, installation of cloudlets near the access network seems a very promising solution because of its support for wide geographical network distribution, low latency, mobility and heterogeneity. Finding the optimal cost of cloudlet deployment over urban, suburban, and rural deployment areas with an existing access network essentially means finding the optimal placement locations of the cloudlets over the entire deployment area and the optimal amount of computational and storage resources per cloudlet. Technically, this research question leads to an assignment problem, where we need to find the optimal interconnections between mobile devices and cloudlets.
In this research, we propose a hybrid cost-optimal cloudlet placement framework over existing fibre-wireless access networks based on mixed-integer non-linear programming. We primarily focus on static cloudlet network planning and placement, i.e., identification of the exact optimal cloudlet placement locations over urban, suburban and rural deployment scenarios, to provide guidance on installation cost and to assess the workload distribution among different cloudlets and the percentage of incremental energy arising from the presence of cloudlets in fibre-wireless access networks. However, we observed that mixed-integer programming based frameworks suffer from scalability issues with large networks and become unusable when network data is unavailable. To overcome this issue, we design analytical frameworks that can provide a quick first-hand estimate of cloudlet deployment cost depending on mobile user density, network architecture, and QoS requirements. We verify that the results produced by this method can be considered tight lower bounds on those produced by integer programming based frameworks for most practical scenarios. We further perform a parametric analysis to understand the dependence of cloudlet deployment cost on various network parameters. However, depending on the mobility patterns and dynamically varying computational requirements of the associated mobile devices, cloudlets in different parts of the network become either overloaded or under-loaded. Thus, we propose an economic, non-cooperative load balancing game for low-latency applications among neighbouring cloudlets, from the same as well as different service providers. While addressing load balancing problems, most authors stress minimising the end-to-end latency and do not consider the heterogeneity of neighbouring cloudlets. In practice, however, mobile users are satisfied as long as their job requests are processed within the requested QoS latency target.
Therefore, instead of formulating a conventional latency minimisation game, we propose a novel utility maximisation game to capture the multi-party economic interaction among heterogeneous neighbouring cloudlets. In this load balancing game, the participating cloudlets achieve their maximum utility when the end-to-end latency equals the QoS latency target. With this formulation, each cloudlet is always interested in receiving some extra job requests, and the associated incentives, from its neighbouring cloudlets to push its utility towards the maximum point. To implement this game-theoretic load balancing framework, we firstly propose a centralised mechanism in which all the competing cloudlets send their predicted job request arrival rates to a neutral mediator. The mediator computes the Nash equilibrium load balancing strategies for the cloudlets and broadcasts them before the actual job requests arrive. This centralised mechanism also ensures that competing cloudlets are truthful when revealing private information, e.g., total incoming job requests. Secondly, we propose a continuous-action reinforcement learning automata-based scheme, which allows each cloudlet to independently compute the Nash equilibrium in a completely distributed network setting. We critically study the convergence properties of the designed learning algorithm, using our understanding of the underlying load balancing game to obtain faster convergence, and study the impacts of exploration and exploitation on learning accuracy. After investigating the cloudlet placement and load balancing problems, we investigate the role of edge-intelligence servers like cloudlets in deploying low-latency human-to-machine applications, such as teleoperation, immersive virtual/augmented reality, and industrial automotive control, over long-distance access networks.
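The idea that utility peaks exactly at the QoS latency target can be illustrated with an M/M/1 queueing stand-in (an assumption for this sketch, not the thesis's model): each cloudlet's preferred load is the arrival rate at which its mean latency equals the target, and a simple surplus/slack exchange moves a pair of cloudlets towards that point.

```python
def mm1_latency(arrival, service):
    """M/M/1 mean response time (illustrative queueing model)."""
    assert arrival < service, "queue must be stable"
    return 1.0 / (service - arrival)

def target_load(service, qos_target):
    """Arrival rate at which latency exactly meets the QoS target,
    i.e. the peak of a utility maximised at the target latency."""
    return service - 1.0 / qos_target

def pairwise_balance(arrivals, services, qos_target):
    """One greedy exchange: the cloudlet above its target load hands the
    surplus to the one below, as far as the receiver's slack allows."""
    t = [target_load(s, qos_target) for s in services]
    a = list(arrivals)
    surplus = a[0] - t[0]
    slack = t[1] - a[1]
    move = max(0.0, min(surplus, slack))
    a[0] -= move
    a[1] += move
    return a

# Two identical cloudlets (service rate 10 jobs/s), QoS target 0.5 s.
balanced = pairwise_balance([9.0, 5.0], [10.0, 10.0], qos_target=0.5)
print(balanced)                   # [8.0, 6.0]
print(mm1_latency(balanced[0], 10.0))  # 0.5: exactly the QoS target
```

The receiving cloudlet happily absorbs the extra jobs (and, in the thesis's game, the associated incentive payment) because doing so moves it closer to its own utility peak rather than away from it.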
Such applications are being realised through the Tactile Internet, which allows users to control remote things and involves the bi-directional transmission of video, audio, and haptic data. However, end-to-end propagation latency presents a stubborn bottleneck, which can be alleviated by using various artificial intelligence-based application layer and network layer prediction algorithms, e.g., forecasting and preempting haptic feedback transmission. To gain proper insights, we study experimental data on the traffic characteristics of control signals and haptic feedback samples obtained through virtual reality-based human-to-machine teleoperation. Moreover, we propose the installation of edge-intelligence servers between master and slave devices to implement the preemption of haptic feedback from control signals. Harnessing virtual reality-based teleoperation experiments, we further propose a two-stage artificial intelligence-based module for forecasting haptic feedback samples. The first-stage unit is a supervised binary classifier that detects whether haptic sample forecasting is necessary, and the second-stage unit is a guided reinforcement learning unit that ensures haptic feedback samples are forecasted accurately when different types of material are present. Furthermore, by evaluating analytical expressions, we show the feasibility of deploying remote human-to-machine teleoperation over fibre backhaul using our proposed artificial intelligence-based module, even under heavy traffic intensity.
Automating Computed Tomography Analysis for Early Diagnosis of Neurological Diseases
Neurological diseases are diseases of the nervous system that occur due to structural or biochemical abnormalities in the brain and nervous system. The diversity of neurological diseases and their varied symptoms make it complicated to diagnose them with a standard protocol. Nevertheless, medical imaging can play a significant role in their early diagnosis by providing an accurate visualisation of internal body structures. However, analysis of medical images mostly involves significant human intervention in complex disease cases. This process is not only time-intensive but also laborious, and exhibits inter- and intra-observer variance. To this end, this study contributes to automating the early diagnosis of neurological diseases from computed tomography images. The first contribution of the thesis involves early diagnosis of cerebral aneurysms from computed tomography angiograms. A large-scale computed tomography angiogram dataset is constructed to investigate the automated diagnosis of unruptured cerebral aneurysms. A novel convolutional neural network architecture is proposed and trained on the dataset to identify aneurysm voxels in the images and, subsequently, diagnose the presence of an aneurysm in a given scan. The proposed approach achieves a sensitivity of 92% in diagnosing aneurysms and a Dice score of 65.2% in their localisation, demonstrating the efficacy of the proposed work. The second focus is on Parkinson's disease, a neurological disease affecting the control of body movements. It can cause significant speech impairment early in its course. Therefore, analysing the abnormalities in vocal fold movements during phonation can be a useful indicator of early signs. Computed tomography is an efficient imaging modality that captures dynamic vocal fold movements with good spatial and temporal resolution. It therefore allows a direct assessment of the movements of the vocal folds and associated structures.
A large-scale image dataset is constructed by capturing computed tomography scans of the neck during vocalisation. First, a basic image processing-based approach is proposed that helps to explore and identify clinically useful feature points on the arytenoid cartilages supporting the vocal fold movements. Further, a convolutional neural network-based object detector is trained to fully localise the arytenoid cartilages. The inter-arytenoid distance feature is then extracted to demonstrate its utility in differentiating Parkinson's patients from healthy controls. In the final part of the thesis, novel machine learning interpretability techniques based on canonical correlation analysis are proposed to assist in interpreting the representations learned by convolutional neural networks designed for specific medical image analysis tasks. A set of novel two-dimensional multiset canonical correlation analysis algorithms is proposed that effectively captures the linear relationships between learned feature representations within and between neural networks. Results are presented by employing the proposed interpretability techniques to analyse the learned representations of neural networks trained to segment cerebral aneurysms from computed tomography angiograms. In summary, the thesis contributes to automating the analysis of computed tomography images for early detection of neurological diseases.
Risk Management Frameworks and Methodologies for Modern and Resilient Power Systems Planning Using Machine Learning Techniques
Renewable energy technologies, customer behaviour, and new regulations are key factors contributing to a change in the power generation paradigm, which is becoming increasingly decentralized and embedded in the distribution network. The new paradigm, together with strong opportunities, brings challenges for power networks that must be adequately anticipated and planned for in order to maintain the security and reliability of the power supply. This research addresses two key challenges for developed power networks and one challenge for developing networks located in countries vulnerable to extreme weather events. For developed power networks, this research formulated risk assessment models based on Artificial Intelligence techniques that enable power system planners to analyse vast numbers of scenarios and assess the impact of voltage excursions and reverse power flows resulting from elevated penetration of distributed energy resources. The novelty of the work derives from the scalability of the proposed models and their end-to-end approach, which includes financial modelling of the impacts. For the developing network, this research developed a risk-based methodology to assess resilience to extreme weather events that is linked to power system planning. The novelty of the proposed methodology derives from a problem formulation that explicitly considers both technical power system resilience and social community energy resilience in quantifiable terms, linked to power system planning via an optimization problem.
Probabilistic Energy Management Systems in PV-Rich Communities
The increasing popularity of renewable and Distributed Energy Resources (DER) and the introduction of smart meters are changing the way electricity distribution grids are operated. The stochastic nature of renewable sources adds new challenges to distribution grid operations. Communities, defined as groups of individual customers that utilise renewable energy sources, are especially impacted by these challenges due to their lack of scale and know-how. In this thesis, we focus on PV-rich communities that have a number of end-users equipped with rooftop photovoltaic (PV) panels but no local storage. For such PV-rich communities, it is beneficial to model and analyse the statistical properties of DER and their demand. Historical data can help in understanding the stochastic behaviour of community DER and demand, and in modelling them as random sequences. These random sequences are then used as a basis for optimal decision-making on financial contracts between communities and energy generators. Unlike stochastic optimisation, forecasting, and Monte Carlo simulation, our methodology enables PV-rich communities to conduct long-term planning, analyse spot-market exposure risk, fine-tune power purchase agreements, and gain a good understanding of the statistical properties of distribution networks with PV systems. Our approach benefits from data science and uses models and existing data in a computationally efficient manner. With the help of our proposed model-based tool, communities are able to plan their long-term financial agreements without conducting a large number of simulations.
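A flavour of the Monte-Carlo-free, model-based analysis: if community demand and PV output are modelled as independent Gaussian random variables (a simplifying assumption for this sketch, not the thesis's random-sequence models), the shortfall risk of a contracted import level has a closed form, and the contract can be sized directly for a target risk.

```python
from statistics import NormalDist

def shortfall_probability(demand_mean, demand_std, pv_mean, pv_std, contracted):
    """P(net demand exceeds the contracted import) under independent
    Gaussian demand and PV output; net demand ~ N(d_mu - pv_mu, sqrt sum var)."""
    net = NormalDist(demand_mean - pv_mean,
                     (demand_std ** 2 + pv_std ** 2) ** 0.5)
    return 1.0 - net.cdf(contracted)

def contract_for_risk(demand_mean, demand_std, pv_mean, pv_std, risk):
    """Smallest contracted import keeping shortfall probability at `risk`."""
    net = NormalDist(demand_mean - pv_mean,
                     (demand_std ** 2 + pv_std ** 2) ** 0.5)
    return net.inv_cdf(1.0 - risk)

# Illustrative numbers: demand N(100, 10), PV N(30, 15), in kWh per period.
print(shortfall_probability(100, 10, 30, 15, 70))  # 0.5: contract = mean net demand
print(contract_for_risk(100, 10, 30, 15, 0.05))    # ~99.7: 5% shortfall risk
```

No simulations are run: sizing the contract for any risk level is a single quantile evaluation, which is the computational advantage the model-based approach offers over Monte Carlo.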
Supporting Latency-Critical Applications by Wireless and Mobile Networks: MAC Layer Approaches
The Internet has evolved a long way from transporting basic web data to carrying the traffic of the new emerging services available today. One such service is wireless-based remote human-to-machine/human interaction, in which touch- and actuation-related information is delivered over the network. Such a service is referred to as the Tactile Internet, and it requires a network, integrating wired and wireless communication technologies, that provides high reliability and ultra-low end-to-end latency in the millisecond range. To date, some wired access networks can partly meet the requirements of the Tactile Internet, while currently deployed wireless access networks may not be able to fulfil these needs. Specifically, uplink latency resulting from the signalling process and queueing in the MAC layer becomes the bottleneck when applying wireless networks to the Tactile Internet. Today, wireless local area networks (WLANs) and mobile cellular networks have become the dominant wireless solutions connecting the vast majority of users around the globe, owing to characteristics such as ease of deployment and low cost and power consumption. Network designers, however, need to rethink the underlying signalling mechanisms so that the latency of wireless transmissions can be significantly reduced. The main objective of this thesis is to propose latency-reduction algorithms and transmission schemes for WLANs and mobile cellular networks, and to study the feasibility of applying the proposed algorithms and schemes to the Tactile Internet. For this purpose, we selected the HCF Controlled Channel Access (HCCA) of WLAN and the Semi-Persistent Scheduling (SPS) scheme of the LTE network as our candidates. Unlike other WLAN MAC protocols, which are prone to packet collisions, HCCA operates a polling mechanism with guaranteed airtime for certain admitted users.
We studied the performance of HCCA by analytically deriving a closed-form expression for its average uplink latency, defined in terms of various network and timing parameters. The derived expression proved accurate when compared with discrete-event simulations. We then conducted global sensitivity analyses to further understand the implications of these parameters for the latency performance. Lastly, we proposed a strategic parameter selection algorithm to fine-tune WLAN network parameters so that the average latency of HCCA can be reduced effectively. Our analytical and simulation studies of the proposed algorithm considered a small, constant payload size to match the traffic characteristics of the Tactile Internet. Results showed that, given proper network settings, a WLAN implementing HCCA is able to provide satisfactory latency performance with sub-millisecond latency. On the other hand, SPS was selected for the mobile cellular network to meet the stringent latency requirement because it not only eliminates the delay of the signalling process by pre-allocating uplink radio resources periodically, but also requires zero consumption in the control channel, which is especially beneficial for increasing the accessibility of a mobile cellular network. However, the mismatch between the pre-defined periodic resource allocation at the station side and the actual resource demand at the user side results in an over/under-scheduling problem, which limits the practical implementation of SPS for variable bit rate (VBR) data traffic. In this thesis, we addressed the over/under-scheduling problem by proposing a predictive SPS scheme, which enables dynamic resource allocation that closely matches the actual bandwidth requirements of Tactile users. In addition, a Feasible Resource Allocator (FRA) was proposed to introduce flexibility into resource scheduling and allow coexistence of Tactile and non-Tactile devices.
Finally, we further enhanced scheduling accuracy by incorporating machine learning into the proposed predictive SPS scheme. In particular, we designed an Attention-based prediction model that implements dimension expansion at the pre-processing stage and introduces an initial input containing learnable parameters to the decoder, so that the highest prediction accuracy is achieved during training. It is important to note that the simulations carried out for the proposed predictive SPS scheme and the Attention-based prediction model were based on traffic traces generated by haptic-tactile experiments. Simulation results revealed that a sub-3 ms uplink latency in a mobile cellular network could be realised by our proposed SPS scheme with a simple autoregressive prediction model, and that the uplink latency could be further improved by our Attention-based model.
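The over/under-scheduling trade-off that motivates the predictive scheme can be illustrated with a toy sketch (the demand trace, smoothing gain and grant margin below are hypothetical, and the exponential smoother is only a simple stand-in for the autoregressive predictor):

```python
def forecast(history, alpha=0.6):
    """One-step demand forecast via exponential smoothing (a simple
    stand-in for an autoregressive predictor; alpha is a hypothetical gain)."""
    est = history[0]
    for d in history[1:]:
        est = alpha * d + (1 - alpha) * est
    return est

def compare_schedulers(demand, fixed_grant, margin=1.1):
    """Total resource mismatch [over-grant, under-grant] for a fixed
    periodic SPS grant versus a predictive grant sized from recent demand."""
    fixed = [0.0, 0.0]   # classic SPS: same grant every period
    pred = [0.0, 0.0]    # predictive grant with a safety margin
    history = []
    for d in demand:
        fixed[0] += max(0.0, fixed_grant - d)
        fixed[1] += max(0.0, d - fixed_grant)
        grant = margin * forecast(history) if history else fixed_grant
        pred[0] += max(0.0, grant - d)
        pred[1] += max(0.0, d - grant)
        history.append(d)
    return fixed, pred

# A VBR trace whose level shifts mid-way: the fixed grant mismatches in
# both phases, while the predictive grant tracks the shift.
fixed, pred = compare_schedulers([2.0] * 10 + [6.0] * 10, fixed_grant=4.0)
print("fixed mismatch:", sum(fixed), "predictive mismatch:", sum(pred))
```

The thesis's scheme additionally handles admission and coexistence through the FRA; this sketch only shows why tracking the demand beats a static periodic grant for VBR traffic.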
Flexibility and Grid Services from Distributed Multi-energy System
The energy system is in transition towards a low-carbon future. Large-scale renewable energy resources (RES) and distributed energy resources (DER) are replacing conventional generators, which places great challenges on traditional energy systems. The intermittent and uncontrollable nature of RES and the lack of visibility of DER create greater imbalances between supply and demand, which increase the need for frequency control. At the same time, the withdrawal of synchronous generators, the traditional providers of grid services, further reduces system security. Additional flexibility and new grid-service providers need to be sought in order to successfully integrate these emerging technologies while maintaining the reliability and security of the system. Although consumer participation is seen as "the heart of the transition", the flexibility from consumers, DER, and other energy vectors and sectors remains largely untapped. This thesis studies the potential of, and possible ways of exploiting, flexibility from such distributed multi-energy systems (DMES), so as to aid the integration of RES. In this context, a comprehensive, integrated techno-economic modelling framework is developed to identify, quantify and optimise the flexibility from DMES and to develop relevant new business cases. This includes a high-resolution multi-energy demand model to understand the "building block" of the energy system analysis; a multi-market, multi-service co-optimisation model for optimal DMES operation; and a business-case assessment model and an investment model for planning DMES under uncertainty to support new business cases. The value of these contributions is demonstrated through various realistic case studies.
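As a cartoon of the multi-service co-optimisation idea (all prices and capacities below are hypothetical), consider a flexible unit splitting its capacity each hour between the energy market and a reserve service; with a single shared capacity limit and no inter-temporal coupling, the hours decouple and the optimum simply commits to the better-paying service in each hour:

```python
def co_optimise(capacity_mw, energy_prices, reserve_prices):
    """Hour-by-hour dispatch of a flexible unit across two services.
    With one linear constraint (energy + reserve <= capacity) per hour
    and no coupling between hours, each hour's best split commits all
    capacity to the higher-priced service."""
    schedule, revenue = [], 0.0
    for e, r in zip(energy_prices, reserve_prices):
        service, price = ("energy", e) if e >= r else ("reserve", r)
        schedule.append((service, capacity_mw))
        revenue += price * capacity_mw
    return schedule, revenue

# Two hours: energy pays better in hour 1, reserve in hour 2.
plan, rev = co_optimise(5.0, energy_prices=[40.0, 20.0], reserve_prices=[30.0, 35.0])
print(plan, rev)
```

Real DMES co-optimisation, as in the thesis, couples hours through storage and thermal dynamics and therefore requires a full optimisation model; this decoupled case is only meant to fix ideas.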
Novel Defenses Against Data Poisoning in Adversarial Machine Learning
Machine learning models are increasingly being used for automated decision making in a wide range of domains such as security, finance, and communications. Machine learning algorithms are built upon the assumption that the training data and test data have the same underlying distribution. This assumption fails when (i) data naturally evolves, causing the test data distribution to diverge from the training data distribution, and (ii) malicious adversaries distort the training data (i.e., poisoning attacks), which is the focus of this thesis. Even though machine learning algorithms are used widely, there is a growing body of literature suggesting that their prediction performance degrades significantly in the presence of maliciously poisoned training data. The performance degradation can mainly be attributed to the fact that most machine learning algorithms are designed to withstand stochastic noise in data, but not malicious distortions. Through malicious distortions, adversaries aim to force the learner to learn a model that differs from the model it would have learned had the training data been pristine. With the models being compromised, any systems that rely on the models for automated decision making would be compromised as well. This thesis presents novel defences for machine learning algorithms to avert the effects of poisoning attacks. We investigate the impact of sophisticated poisoning attacks on machine learning algorithms such as Support Vector Machines (SVMs), one-class Support Vector Machines (OCSVMs) and regression models, and introduce new defences that can be incorporated into these models to achieve more secure decision making. Specifically, two novel approaches are presented to address the problem of learning under adversarial conditions as follows. The first approach is based on data projections, which compress the data, and we examine the effect of the projections on adversarial perturbations. 
By projecting the training data onto lower-dimensional spaces in selective directions, we aim to minimize the impact of adversarial feature perturbations on the trained model. The second approach uses Local Intrinsic Dimensionality (LID), a metric that characterizes the dimension of the local subspace in which data samples lie, to distinguish data samples that may have been perturbed (through feature perturbations or label flips). This knowledge is then incorporated into existing learning algorithms in the form of sample weights to reduce the impact of poisoned samples. In summary, this thesis makes a major contribution to research on adversarial machine learning by (i) investigating the effects of sophisticated attacks on existing machine learning models and (ii) developing novel defences that increase the attack resistance of existing models. All presented work is supported by theoretical analysis and empirical results, and is based on publications.
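The LID-based defence can be sketched as follows (a minimal illustration, not the thesis's exact estimator or weighting rule): LID is estimated from each sample's k-nearest-neighbour distances with the standard maximum-likelihood estimator, and samples with unusually high LID estimates are down-weighted before training.

```python
import math

def lid_mle(knn_distances):
    """Maximum-likelihood LID estimate from a sample's sorted k-NN
    distances: -k / sum(log(r_i / r_k))."""
    k = len(knn_distances)
    r_max = knn_distances[-1]
    s = sum(math.log(r / r_max) for r in knn_distances if r > 0)
    return -k / s

def lid_weights(lids, sharpness=1.0):
    """Hypothetical down-weighting rule: samples whose LID estimate
    exceeds the mean (suggesting they sit off the clean-data manifold)
    get exponentially smaller training weights."""
    mean = sum(lids) / len(lids)
    return [math.exp(-sharpness * max(0.0, l - mean)) for l in lids]

# Neighbour distances growing like (i/k)^(1/d) yield LID estimates that
# scale with the local dimension d.
d1 = lid_mle([i / 10 for i in range(1, 11)])             # ~1-D neighbourhood
d2 = lid_mle([math.sqrt(i / 10) for i in range(1, 11)])  # ~2-D neighbourhood
print(round(d1, 2), round(d2, 2))
```

In the thesis the resulting weights are folded into the learners (SVMs, OCSVMs, regression) so that suspected poisoned samples contribute less to the fitted model.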
A Model-based Approach for High Performance Motion Control in Industrial Machines
Industrial machines typically perform tasks such as laser/water cutting and grinding. Within these machines, the motion controller is responsible for positioning the end effector. The performance of the motion controller directly influences the quality of the resulting product, since tolerance and accuracy are surrogates for machining quality. This is particularly relevant in tracking and contouring applications where the system has structural flexibility and no direct feedback measurement of the end-effector position is available. Traditional control architectures in machining are unable to explicitly bound tracking and/or contouring errors, so conservative operation is used to ensure satisfactory performance of the overall system. Bounding these errors without unduly compromising machine throughput requires advanced control algorithms, and the development of such algorithms is the focus of this thesis. Although numerous control methods have been proposed, proportional-integral-derivative (PID) based cascaded control is still the most prevalent in industry. Based on this fact, the research starts by objectively assessing tracking-control performance on a single-axis industrial platform. The results provide practitioners with an in-depth understanding of the benefits and limitations of existing control algorithms, as well as motivation to consider advanced controllers as alternatives to the PID-based approach. For the single-axis tracking problem, this research proposes a model-predictive-control-based approach that guarantees a desired tracking-error bound is met in cases where the structure is flexible and the end-effector position is estimated. To achieve this, a robust control invariant set is estimated using a computationally tractable algorithm and incorporated into the problem formulation. The applicability of the proposed approach is successfully demonstrated via simulations and experiments conducted on a commercial single-axis system.
For biaxial applications, dual-drive gantry machines are widely used in manufacturing. However, non-synchronised movement of the two drives may degrade contouring accuracy. In this research, we propose two model predictive control architectures, built on switched linear time-invariant control-oriented models, that are able to guarantee a two-dimensional contouring tolerance in the presence of uncertainty arising from imperfect drive synchronisation. The performance and computational tractability of the proposed approach are demonstrated through high-fidelity simulations and experiments.
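The receding-horizon idea behind these controllers can be sketched on a single flexible axis modelled as a double integrator (a brute-force toy with hypothetical cost weights and a tiny input set, not the thesis's constrained formulation with robust invariant sets):

```python
from itertools import product

def mpc_step(x, v, ref, horizon=3, u_set=(-1.0, 0.0, 1.0), dt=0.1):
    """One receding-horizon step for a double-integrator axis (position x,
    velocity v): enumerate all short input sequences, score each by a
    tracking cost, and apply only the first input of the best sequence."""
    best_u, best_cost = 0.0, float("inf")
    for seq in product(u_set, repeat=horizon):
        xs, vs, cost = x, v, 0.0
        for u in seq:
            vs += u * dt
            xs += vs * dt
            # position error dominates; small velocity and effort penalties
            cost += (xs - ref) ** 2 + 0.1 * vs ** 2 + 0.01 * u ** 2
        if cost < best_cost:
            best_cost, best_u = cost, seq[0]
    return best_u

# Far from the reference the controller accelerates; at the reference
# with zero velocity it holds still; moving past it, it brakes.
print(mpc_step(0.0, 0.0, ref=1.0))
print(mpc_step(1.0, 0.0, ref=1.0))
```

In closed loop this step is re-solved at every sampling instant. The thesis's formulations go further by tightening constraints with a robust control invariant set, so the tracking/contouring error bound holds despite state-estimation error and imperfect drive synchronisation.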