Electrical and Electronic Engineering - Theses

  • Item
    Adversarial Robustness in High-Dimensional Deep Learning
    Karanikas, Gregory Jeremiah (2021)
    As applications of deep learning continue to be discovered and implemented, the problem of robustness becomes increasingly important. It is well established that deep learning models are seriously vulnerable to adversarial attacks: malicious attackers targeting learning models can generate so-called "adversarial examples" that deceive the models. These adversarial examples can be generated from real data by adding small perturbations in specific directions. This thesis focuses on explaining the vulnerability of neural networks to adversarial examples, an open problem which has been addressed from various angles in the literature. The problem is approached geometrically, by considering adversarial examples as points which lie close to the decision boundary in a high-dimensional feature space. By invoking results from high-dimensional geometry, it is argued that adversarial robustness is impacted by high data dimensionality. Specifically, an upper bound on robustness which decreases with dimension is derived, subject to a few mathematical assumptions. To test the idea that adversarial robustness is affected by dimensionality, we perform experiments in which robustness metrics are compared after training neural network classifiers on various dimension-reduced datasets. We use MNIST and two cognitive radio datasets for our experiments, and we compute the attack-based empirical robustness and the attack-agnostic CLEVER score, both of which are approximations of true robustness. These experiments show correlations between adversarial robustness and dimension in certain cases.
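    The idea of perturbing real data "in specific directions" to cross a decision boundary can be sketched on a toy model. The following is a minimal illustration, not the thesis's method: it uses a linear binary classifier (rather than a neural network) so that the gradient of the score is simply the weight vector and the smallest boundary-crossing step can be computed in closed form. All names and values are assumptions for the sketch.

    ```python
    import numpy as np

    rng = np.random.default_rng(0)
    d = 100                    # feature dimension (illustrative)
    w = rng.normal(size=d)     # linear classifier weights (assumed, not trained)
    x = rng.normal(size=d)     # a "clean" input point

    def predict(v):
        # Decision rule: sign of the linear score w @ v.
        return int(w @ v > 0)

    # The score's gradient w.r.t. the input is w itself, so stepping against
    # sign(w) moves the score fastest per unit of L-infinity perturbation.
    # Choose eps just large enough to push the score across the boundary.
    eps = 1.1 * abs(w @ x) / np.abs(w).sum()
    direction = 1.0 if predict(x) else -1.0
    x_adv = x - eps * direction * np.sign(w)

    # The perturbation is tiny per coordinate, yet the label flips.
    print("clean label:", predict(x), "adversarial label:", predict(x_adv))
    print("max per-coordinate perturbation:", float(np.max(np.abs(x_adv - x))))
    ```

    The quantity `eps` here is a closed-form analogue of the attack-based empirical robustness mentioned in the abstract: the smallest perturbation magnitude that changes the model's decision. For neural networks no closed form exists, which is why the thesis relies on attack-based estimates and the CLEVER score instead.
    
    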
  • Item
    Multi-observer approach for estimation and control under adversarial attacks
    Yang, Tianci (2019)
    Traditional control systems composed of interconnected controllers, sensors, and actuators use point-to-point communication architectures. This is no longer suitable when new requirements -- such as modularity, decentralisation of control, integrated diagnostics, quick and easy maintenance, and low cost -- are necessary. To meet these requirements, Networked Control Systems (NCSs) have emerged as a technology that combines control, communication, and computation, and offers the necessary flexibility to meet new demands in distributed and large-scale systems. However, these new architectures, especially wireless NCSs, are more susceptible to adversarial attacks. For instance, one of the most well-known examples of attacks on NCSs is the Stuxnet virus, which targeted Siemens' supervisory control and data acquisition systems used in many industrial processes. Another very recent incident is the attack on the Ukraine power grid, where an adversarial attack caused a power outage affecting more than 80,000 people for almost 3 hours. These incidents (and many others not mentioned here) show that there is an acute need for strategic defence mechanisms to identify and deal with adversarial attacks on NCSs. In this thesis, based on sensor and actuator redundancy, we develop a "multi-observer based estimation framework" to address the problem of state estimation for discrete-time nonlinear systems with general dynamics under sensor and actuator false data injection attacks. Although there exist results in the literature addressing similar problems, in general they are only applicable to specific classes of nonlinear systems. To the best of the author's knowledge, a unifying estimation framework that works for general nonlinear systems in the presence of attacks has not been proposed. The estimation scheme provided here can be applied to a large class of nonlinear systems as long as a bank of observers with certain stability properties exists.
Once an estimate of the system states is obtained from the multi-observer estimator, we provide detection and isolation algorithms for detecting attacks and for identifying attacked sensors and actuators. For nonlinear systems in the presence of sensor attacks, process disturbance, and measurement noise, we detect and isolate attacked sensors by designing multiple observers and comparing their estimates. For noise-free nonlinear systems under sensor and actuator attacks, we isolate attacked sensors and actuators by reconstructing the attack signals. Furthermore, for LTI systems, we provide a simple yet effective control method to stabilise the system despite sensor and actuator attacks, by switching off the isolated actuators and closing the system loop with the proposed estimator and a switching output feedback controller. Finally, we use a class of nonlinear systems with positive-slope nonlinearities under sensor attacks and measurement noise as a detailed case study, where we provide a deeper discussion of the tools that we propose. In particular, we give sufficient conditions under which our tools are guaranteed to work; we also give sufficient conditions under which such methods cannot work. These results have been published in our previous conference papers.
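    The comparison-based isolation idea ("designing multiple observers and comparing their estimates") can be sketched on a toy scalar LTI plant with three redundant sensors, one of which is under a false-data-injection attack. Everything below — the plant, observer gain, and attack signal — is an illustrative assumption, not the thesis's construction: one Luenberger-style observer is driven by each sensor, and the observer whose estimate disagrees most with the others flags the attacked sensor.

    ```python
    import numpy as np

    a, L = 0.9, 0.5              # stable scalar plant x+ = a*x; observer gain (assumed)
    T = 60                       # simulation horizon
    x = 1.0                      # true state
    xhat = np.zeros(3)           # one observer estimate per sensor

    def attack(t):
        # Constant false-data injection on sensor index 2 (illustrative).
        return 5.0

    for t in range(T):
        # Each sensor nominally measures the full state: y_i = x (+ attack on i=2).
        y = np.array([x, x, x + attack(t)])
        # Luenberger update, run once per sensor: xhat+ = a*xhat + L*(y - xhat).
        xhat = a * xhat + L * (y - xhat)
        x = a * x

    # Isolation by pairwise comparison: sum each estimate's distance to the
    # others; the observer fed by the attacked sensor is the outlier.
    dists = np.abs(xhat[:, None] - xhat[None, :]).sum(axis=1)
    isolated = int(np.argmax(dists))
    print("isolated sensor:", isolated)   # flags the compromised sensor (index 2)
    ```

    The redundancy assumption is what makes this work: with enough sensors unaffected by the attack, the healthy observers agree among themselves, so a biased estimate stands out. The thesis develops this mechanism rigorously for general nonlinear dynamics, with noise and disturbance, and quantifies how many attacked sensors can be tolerated.
    
    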