Computing and Information Systems - Theses

A Novel Perspective on Robustness in Deep Learning
Mohaghegh Dolatabadi, Hadi (2022)
Machine learning now plays a crucial role in our path toward automated decision-making. Traditional machine learning algorithms require careful, often manual, feature engineering to deliver satisfactory results; Deep Neural Networks (DNNs) have shown great promise in automating this process and are today the primary candidate for applications ranging from object detection to high-dimensional density estimation and beyond. Despite their impressive performance, however, DNNs are vulnerable to several security threats. In adversarial attacks, an adversary can alter the output of a DNN to its benefit by adding carefully crafted yet imperceptible distortions to clean samples. In backdoor (Trojan) attacks, an adversary intentionally plants a loophole in the DNN during the learning process, typically by attaching specific triggers to benign samples during training so that the model learns an association between the trigger and a particular, attacker-intended output; once the loophole is planted, the attacker can activate the backdoor with the learned triggers and override the model's normal behavior. These examples demonstrate the fragility of DNN decision-making and call into question their widespread use in safety-critical applications such as autonomous driving.

This thesis studies these vulnerabilities from novel perspectives. We identify two key challenges in previous work on the robustness of neural networks. First, while a plethora of existing algorithms can robustify DNNs against attackers to some extent, these methods often lack the efficiency required for real-world use. Second, the true nature of these adversaries has received less attention, leading to unrealistic assumptions about their behavior. This is particularly critical because defense mechanisms built on such assumptions fail to address the underlying threats and create a false sense of security.

We address the first challenge in the context of robust DNN training by leveraging the theory of coreset selection to form informative weighted subsets of the data, in two settings. First, we develop an online algorithm for filtering poisonous data to prevent backdoor attacks: we identify two critical properties of poisonous samples, based on their gradient-space and geometrical representations, and define a selection objective around these criteria so that clean samples are retained. Second, we extend coreset selection to adversarial training of DNNs. Although adversarial training is one of the most effective defenses against adversarial attacks, it requires iteratively generating a costly adversarial example for every training sample. To ease the computational burden of various adversarial training methods in a unified manner, we build a weighted subset of the training data that faithfully approximates the DNN gradient (a simplified sketch of this gradient-matching idea follows below). We show that our proposed solution leads to more efficient robust neural network training in both scenarios.
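The thesis's actual selection objectives are more involved, but the core gradient-matching idea can be illustrated with a small, self-contained sketch. The code below is a hypothetical matching-pursuit-style heuristic operating on random vectors that stand in for per-sample gradients; it is not the algorithm developed in the thesis, only a rough illustration of picking a weighted subset whose gradients approximate the full-batch gradient.

```python
import numpy as np

def greedy_gradient_coreset(grads, k):
    """Greedily pick k samples (and weights) whose weighted gradient sum
    approximates the full-batch gradient sum, matching-pursuit style."""
    n, d = grads.shape
    target = grads.sum(axis=0)           # full-batch gradient to approximate
    residual = target.copy()
    chosen, weights = [], []

    for _ in range(k):
        scores = grads @ residual        # alignment of each sample with the residual
        scores[chosen] = -np.inf         # do not pick the same sample twice
        i = int(np.argmax(scores))

        g = grads[i]
        w = float(g @ residual) / (float(g @ g) + 1e-12)  # 1-D least-squares weight

        chosen.append(i)
        weights.append(w)
        residual = residual - w * g      # remove the part already explained

    return np.array(chosen), np.array(weights)


if __name__ == "__main__":
    rng = np.random.default_rng(0)
    per_sample_grads = rng.normal(size=(512, 64))   # toy stand-in for per-sample gradients
    idx, w = greedy_gradient_coreset(per_sample_grads, k=32)
    approx = (w[:, None] * per_sample_grads[idx]).sum(axis=0)
    full = per_sample_grads.sum(axis=0)
    rel_err = np.linalg.norm(full - approx) / np.linalg.norm(full)
    print(f"relative gradient-matching error (32 of 512 samples): {rel_err:.3f}")
```

Because the selected weighted subset reproduces the full-batch gradient direction, training (or adversarial training) on that subset can follow a similar optimization trajectory at a fraction of the per-iteration cost, which is the efficiency argument made above.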
We then turn to the second challenge and question the validity of one of the most widely used assumptions about adversarial attacks: that adversarial examples stem from an entirely different distribution than clean data. To challenge this assumption, we resort to generative modeling, in particular Normalizing Flows (NFs). Using an NF model pre-trained on clean data, we demonstrate how one can create adversarial examples that closely follow the clean data distribution (a minimal latent-space sketch is given after this abstract). We then deploy our approach against state-of-the-art adversarial example detection methods and show that detectors which explicitly assume a distributional difference between adversarial and clean data can suffer greatly. This study reveals the importance of correct assumptions in treating adversarial threats. Finally, we extend the distribution-modeling component of our adversarial attacker to increase its density estimation capabilities.

In summary, this thesis advances the current state of robustness in deep learning by i) proposing more effective training algorithms against backdoor and adversarial attacks and ii) challenging a fundamental and prevalent misconception about the distributional properties of adversarial threats. Through these contributions, we aim to help create more robust neural networks, which is crucial before their deployment in real-world applications. Our work is supported by theoretical analysis and experimental investigations based on publications.
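As a rough illustration of the NF-based attack idea described in the abstract, the following PyTorch-style sketch searches the latent space of a pre-trained normalizing flow for an example that decodes close to the clean-data manifold yet fools a classifier. The `flow.inverse` / `flow.forward` interface and the `classifier` handle are assumptions made for illustration, not a specific library API, and the snippet is not the attack developed in the thesis.

```python
import torch
import torch.nn.functional as F

def flow_latent_attack(flow, classifier, x, y_true, steps=50, lr=0.05, reg=0.1):
    """Search the latent space of a pre-trained normalizing flow for an input
    that stays near the learned clean-data manifold yet is misclassified.
    `flow.inverse` (data -> latent) and `flow.forward` (latent -> data) are
    an assumed interface."""
    with torch.no_grad():
        z0 = flow.inverse(x)                       # latent code of the clean sample
    delta = torch.zeros_like(z0, requires_grad=True)
    opt = torch.optim.Adam([delta], lr=lr)

    for _ in range(steps):
        x_adv = flow.forward(z0 + delta)           # decode the perturbed latent
        loss = -F.cross_entropy(classifier(x_adv), y_true)  # push toward misclassification
        loss = loss + reg * delta.pow(2).mean()    # keep the latent shift small
        opt.zero_grad()
        loss.backward()
        opt.step()

    with torch.no_grad():
        return flow.forward(z0 + delta)
```

Since the flow is trained only on clean data, keeping the latent perturbation small keeps the decoded example close to the clean-data distribution, which is what lets such examples evade detectors that assume adversarial inputs are off-distribution.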