Electrical and Electronic Engineering - Theses

Search Results

Now showing 1 - 1 of 1
  • Item
    Adversarial Robustness in High-Dimensional Deep Learning
    Karanikas, Gregory Jeremiah (2021)
    As applications of deep learning continue to be discovered and deployed, the problem of robustness becomes increasingly important. It is well established that deep learning models are seriously vulnerable to adversarial attacks: malicious attackers can generate so-called "adversarial examples" that deceive a model, produced from real data by adding small perturbations in specific directions. This thesis focuses on explaining the vulnerability of neural networks to adversarial examples, an open problem that has been addressed from various angles in the literature. The problem is approached geometrically, by viewing adversarial examples as points that lie close to the decision boundary in a high-dimensional feature space. By invoking results from high-dimensional geometry, it is argued that adversarial robustness is affected by high data dimensionality; specifically, an upper bound on robustness that decreases with dimension is derived, subject to a few mathematical assumptions. To test the idea that adversarial robustness depends on dimensionality, we perform experiments in which robustness metrics are compared after training neural network classifiers on datasets reduced to various dimensions. We use MNIST and two cognitive radio datasets, and we compute attack-based empirical robustness and the attack-agnostic CLEVER score, both of which approximate true robustness. These experiments show correlations between adversarial robustness and dimension in certain cases.
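    The following sketch is not taken from the thesis; it only illustrates the kind of dimension-versus-robustness experiment the abstract describes, under several simplifying assumptions: scikit-learn's small digits dataset stands in for MNIST and the cognitive radio data, PCA provides the dimension reduction, and a linear logistic-regression classifier replaces the neural networks, so the exact distance to the decision boundary can be used in place of the attack-based and CLEVER approximations of robustness.

        import numpy as np
        from sklearn.datasets import load_digits
        from sklearn.decomposition import PCA
        from sklearn.linear_model import LogisticRegression
        from sklearn.model_selection import train_test_split

        # Binary subset of the scikit-learn digits data (a stand-in for MNIST here).
        X, y = load_digits(return_X_y=True)
        mask = (y == 3) | (y == 5)
        X, y = X[mask], (y[mask] == 5).astype(int)

        for k in (8, 16, 32, 64):  # candidate reduced dimensions (64 = no reduction)
            Xk = PCA(n_components=k).fit_transform(X)
            X_tr, X_te, y_tr, y_te = train_test_split(Xk, y, test_size=0.3, random_state=0)

            clf = LogisticRegression(max_iter=2000).fit(X_tr, y_tr)

            # For a linear classifier, the smallest L2 perturbation that flips the
            # prediction of a point x is |w.x + b| / ||w||, i.e. the distance from x
            # to the decision boundary. Its mean over correctly classified test
            # points serves as a simple robustness proxy in this toy setting.
            w, b = clf.coef_.ravel(), clf.intercept_[0]
            margins = np.abs(X_te @ w + b) / np.linalg.norm(w)
            correct = clf.predict(X_te) == y_te
            print(f"dim={k:3d}  mean distance to boundary={margins[correct].mean():.3f}")

    Because the toy classifier is linear, the boundary distance is exact and no attack is needed; for the neural networks studied in the thesis, that distance can only be approximated, which is where attack-based estimates and the CLEVER score come in.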