Computing and Information Systems - Theses

    Machine learning with adversarial perturbations and noisy labels
Ma, Xingjun (2018)
Machine learning models such as traditional random forests (RFs) and modern deep neural networks (DNNs) have been used successfully to solve complex learning problems in applications including speech recognition, image classification, face recognition, gaming agents and self-driving cars. DNNs, for example, have demonstrated performance near, or even surpassing, human level on image classification tasks. Despite this success, these models remain vulnerable in noisy real-world settings where illegitimate or noisy data can corrupt learning. Studies have shown that by adding small, human-imperceptible (in the case of images) adversarial perturbations, normal samples can be turned into "adversarial examples" that DNNs misclassify with high confidence. This raises security concerns when DNNs are employed in security-sensitive applications such as fingerprint recognition, face verification and autonomous cars. Studies have also found that DNNs can overfit to noisy (incorrect) labels and, as a result, generalize poorly. This has been one of the key challenges in applying DNNs to noisy real-world scenarios, where even high-quality datasets tend to contain noisy labels. Another open question in machine learning is whether actionable knowledge (or "feedback") can be generated from prediction models to support decision making towards long-term learning goals, for example mastering a certain type of skill in a simulation-based learning (SBL) environment. We view the feedback generation problem from a new perspective, that of adversarial perturbation, and explore the possibility of using adversarial techniques to generate feedback.

In this thesis, we investigate machine learning models, including DNNs and RFs, and their learning behaviour through the lens of adversarial perturbations and noisy labels, with the aim of achieving more secure and robust machine learning. We also explore the use of adversarial techniques in a real-world application: supporting skill acquisition in SBL environments through the provision of performance feedback.

The first part of our work investigates DNNs and their vulnerability to adversarial perturbations in the context of image classification. In contrast to existing work, we develop new understandings of adversarial perturbations by exploring the DNN representation space with the Local Intrinsic Dimensionality (LID) measure. In particular, we characterize adversarial subspaces in the vicinity of adversarial examples using LID, and find that adversarial subspaces have higher intrinsic dimensionality than normal data subspaces. We not only provide a theoretical explanation of the high dimensionality of adversarial subspaces, but also demonstrate empirically that this property can be used to effectively discriminate adversarial examples generated by state-of-the-art attack methods.

The second part of our work explores the use of adversarial techniques in a beneficial way: to generate interactive feedback for intelligent tutoring in SBL environments. Feedback consists of actions (in the form of feature changes), generated from a pre-trained prediction model, that can be delivered to a learner in an SBL environment to correct mistakes or improve skills. We demonstrate that such feedback can be generated accurately and efficiently from DNNs using properly constrained adversarial techniques.
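To make the idea of feedback as a constrained adversarial perturbation concrete, below is a minimal PyTorch sketch of a targeted, budget-limited perturbation against a pre-trained classifier. The network, the six hypothetical performance features, the class indices, the step size and the L-infinity budget are illustrative assumptions rather than the thesis's method; the sketch only shows the general mechanism of nudging a learner's feature vector towards a desired class while keeping the suggested changes small.

```python
# Minimal sketch (not the thesis implementation): feedback as a targeted,
# budget-constrained perturbation of a learner's feature vector.
import torch
import torch.nn.functional as F

def generate_feedback(model, x, target_class, budget=0.5, steps=50, lr=0.05):
    """Return feature changes that move x towards target_class under an
    L-infinity budget (a simple PGD-style targeted perturbation)."""
    delta = torch.zeros_like(x, requires_grad=True)
    for _ in range(steps):
        # Cross-entropy towards the desired class; lower loss = closer to target.
        loss = F.cross_entropy(model(x + delta), target_class)
        loss.backward()
        with torch.no_grad():
            delta -= lr * delta.grad.sign()   # step towards the target class
            delta.clamp_(-budget, budget)     # keep suggested changes bounded
            delta.grad.zero_()
    return delta.detach()

# Toy usage: an untrained two-layer network over 6 hypothetical performance
# features, with class 1 standing in for 'expert'.
model = torch.nn.Sequential(torch.nn.Linear(6, 16), torch.nn.ReLU(),
                            torch.nn.Linear(16, 2))
x = torch.randn(1, 6)                 # one learner's feature vector
target = torch.tensor([1])            # desired class: 'expert'
feedback = generate_feedback(model, x, target)
print("suggested feature changes:", feedback)
```

In practice, only features a learner can act on would be allowed to change, which is where the properly constrained techniques described in this part of the thesis come in.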
In the third part of our work, we explore adversarial feedback generation from RF models in addition to DNNs. Adversarial perturbations are easily generated from DNNs using gradients computed via backpropagation; however, it remains an open question whether such perturbations can be generated from models such as RFs that do not expose gradients. This part of our work confirms that adversarial perturbations can also be crafted from RFs for the provision of feedback in SBL. In particular, we propose a perturbation method that finds the optimal transition from an undesired class (e.g. 'novice') to the desired class (e.g. 'expert'), based on a geometric view of the RF decision space as overlapping high-dimensional rectangles. We demonstrate empirically that the proposed method is both more effective and more efficient than existing methods, making it suitable for real-time feedback generation in SBL.

The fourth part of our work focuses on noisy-label learning: training accurate DNNs on data with noisy labels. We investigate the learning behaviour of DNNs and show that they exhibit two distinct learning styles when trained on clean versus noisy labels. An LID-based characterization of the intrinsic dimensionality of the DNN representation space (inspired by the first part of our work, and sketched below) allows us to identify two stages of learning on datasets with noisy labels: an initial stage of dimensionality compression followed by a stage of dimensionality expansion. Based on the observation that dimensionality expansion is associated with overfitting to noisy labels, we propose a heuristic learning strategy that avoids the later expansion stage, so as to train DNNs robustly in the presence of noisy labels.

In summary, this work contributes to existing knowledge through a novel dimensionality-based characterization of DNN representations, effective discrimination of adversarial attacks, robust deep learning strategies against noisy labels, and novel approaches to feedback generation. All of this work is supported by theoretical analysis, empirical results and publications.
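For concreteness, the Local Intrinsic Dimensionality measure that underpins the first and fourth parts can be estimated with a k-nearest-neighbour maximum-likelihood estimator. The sketch below is a minimal illustration using NumPy and scikit-learn; the function name lid_mle, the neighbourhood size and the synthetic data are our own assumptions, and the thesis applies such estimates to mini-batches of DNN representations rather than to raw toy points.

```python
# Minimal sketch of the k-NN maximum-likelihood LID estimator (illustrative,
# not the thesis code).
import numpy as np
from sklearn.neighbors import NearestNeighbors

def lid_mle(x, reference_batch, k=20):
    """Maximum-likelihood LID estimate at x:
        LID(x) = -( (1/k) * sum_i log(r_i / r_k) )^{-1}
    where r_1 <= ... <= r_k are distances from x to its k nearest
    neighbours in the reference batch."""
    nn = NearestNeighbors(n_neighbors=k).fit(reference_batch)
    dists, _ = nn.kneighbors(x.reshape(1, -1))
    r = np.maximum(dists[0], 1e-12)   # guard against zero distances
    return -1.0 / np.mean(np.log(r / r[-1]))

# Toy illustration: points lying on a 3-dimensional linear subspace of a
# 64-dimensional space should receive a much lower LID estimate than the
# same points after adding full-dimensional noise.
rng = np.random.default_rng(0)
low_dim = rng.normal(size=(1000, 3)) @ rng.normal(size=(3, 64))
noisy = low_dim + 0.5 * rng.normal(size=low_dim.shape)
print("LID on the 3-d subspace:", round(lid_mle(low_dim[0], low_dim[1:]), 2))
print("LID with added noise   :", round(lid_mle(noisy[0], noisy[1:]), 2))
```

A higher estimate indicates a locally higher-dimensional neighbourhood, which is the signal the thesis uses both to flag adversarial examples and to detect the dimensionality-expansion stage when training on noisy labels.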