Computing and Information Systems - Research Publications

Now showing 1 - 6 of 6
  • Item
    Benchmarking adversarially robust quantum machine learning at scale
    West, MT ; Erfani, SM ; Leckie, C ; Sevior, M ; Hollenberg, LCL ; Usman, M (American Physical Society (APS), 2023-04-01)
    Machine learning (ML) methods such as artificial neural networks are rapidly becoming ubiquitous in modern science, technology, and industry. Despite their accuracy and sophistication, neural networks can be easily fooled by carefully designed malicious inputs known as adversarial attacks. While such vulnerabilities remain a serious challenge for classical neural networks, the extent to which they exist in the quantum ML setting is not fully understood. In this paper, we benchmark the robustness of quantum ML networks, such as quantum variational classifiers (QVCs), at scale by performing rigorous training on both simple and complex image datasets and evaluating against a variety of state-of-the-art adversarial attacks. Our results show that QVCs offer notably enhanced robustness against classical adversarial attacks by learning features that classical neural networks do not detect, indicating a possible quantum advantage for ML tasks. Remarkably, the converse does not hold: attacks crafted on quantum networks are also capable of deceiving classical neural networks. By combining quantum and classical network outcomes, we propose a technique for detecting adversarial attacks. Traditionally, quantum advantage in ML systems has been sought through increased accuracy or algorithmic speed-up, but our study reveals the potential for a different kind of quantum advantage: the superior robustness of ML models. Its practical realization would address serious security and reliability concerns for ML algorithms employed in a myriad of applications, including autonomous vehicles, cybersecurity, and surveillance robotic systems.
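    To make the setting concrete, here is a minimal sketch (not the authors' code) of a small quantum variational classifier attacked with a FGSM-style input perturbation, written with PennyLane. The circuit layout, feature encoding, loss, and all hyperparameters are illustrative assumptions.

    ```python
    # Hypothetical QVC + FGSM sketch; circuit design and parameters are assumptions.
    import pennylane as qml
    from pennylane import numpy as np

    n_qubits = 4
    dev = qml.device("default.qubit", wires=n_qubits)

    @qml.qnode(dev)
    def qvc(weights, x):
        qml.AngleEmbedding(x, wires=range(n_qubits))           # encode features as rotations
        qml.StronglyEntanglingLayers(weights, wires=range(n_qubits))
        return qml.expval(qml.PauliZ(0))                       # score in [-1, 1]

    def fgsm(weights, x, y, eps=0.1):
        """Fast-gradient-sign perturbation of the input features."""
        loss = lambda x_: (qvc(weights, x_) - y) ** 2
        grad_x = qml.grad(loss)(x)                             # gradient w.r.t. the input
        return x + eps * np.sign(grad_x)

    shape = qml.StronglyEntanglingLayers.shape(n_layers=2, n_wires=n_qubits)
    weights = np.random.uniform(0, 2 * np.pi, size=shape, requires_grad=True)
    x = np.array([0.1, 0.5, -0.3, 0.8], requires_grad=True)

    x_adv = fgsm(weights, x, y=1.0)
    print("clean score:", qvc(weights, x), "adversarial score:", qvc(weights, x_adv))
    ```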
  • Item
    Towards quantum enhanced adversarial robustness in machine learning
    West, MT ; Tsang, S-L ; Low, JS ; Hill, CD ; Leckie, C ; Hollenberg, LCL ; Erfani, SM ; Usman, M (Nature Portfolio, 2023-06)
  • Item
    Adversarial Coreset Selection for Efficient Robust Training
    Dolatabadi, HM ; Erfani, SM ; Leckie, C (Springer, 2023-12)
    It has been shown that neural networks are vulnerable to adversarial attacks: adding well-crafted, imperceptible perturbations to their input can modify their output. Adversarial training is one of the most effective approaches to training models that are robust against such attacks. Unfortunately, this method is much slower than vanilla training of neural networks, since it must construct adversarial examples for the entire training set at every iteration. By leveraging the theory of coreset selection, we show how selecting a small subset of training data provides a principled approach to reducing the time complexity of robust training. To this end, we first provide convergence guarantees for adversarial coreset selection. In particular, we show that the convergence bound is directly related to how well our coresets can approximate the gradient computed over the entire training data. Motivated by this theoretical analysis, we propose using the gradient approximation error as our adversarial coreset selection objective to reduce the training set size effectively. Once the coreset is built, we run adversarial training over this subset of the training data. Unlike existing methods, our approach can be adapted to a wide variety of training objectives, including TRADES, $\ell_p$-PGD, and Perceptual Adversarial Training. We conduct extensive experiments to demonstrate that our approach speeds up adversarial training by 2–3 times with only a slight degradation in clean and robust accuracy.
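    The gradient-matching idea in this abstract can be illustrated with a toy greedy heuristic: pick examples whose averaged gradient stays close to the full-batch gradient. The sketch below is a simplified stand-in under that assumption, not the paper's algorithm; `greedy_coreset` and the synthetic gradients are hypothetical.

    ```python
    # Toy gradient-matching coreset selection; a simplified illustration,
    # not the paper's method.
    import numpy as np

    def greedy_coreset(per_example_grads, k):
        """Greedily choose k rows whose mean best approximates the full mean gradient."""
        target = per_example_grads.mean(axis=0)      # full-data gradient
        chosen = []
        residual = target.copy()
        for _ in range(k):
            scores = per_example_grads @ residual    # alignment with what is still missing
            scores[chosen] = -np.inf                 # forbid repeats
            chosen.append(int(np.argmax(scores)))
            residual = target - per_example_grads[chosen].mean(axis=0)
        return chosen

    rng = np.random.default_rng(0)
    grads = rng.normal(size=(1000, 32))              # stand-in per-example gradients
    subset = greedy_coreset(grads, k=100)
    err = np.linalg.norm(grads.mean(axis=0) - grads[subset].mean(axis=0))
    print("gradient approximation error:", err)      # small err -> faithful coreset
    ```

    In the paper's setting, the expensive adversarial-example construction would then run only on the selected subset rather than the full training set.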
  • Item
    Exponentially Weighted Ellipsoidal Model for Anomaly Detection
    Moshtaghi, M ; Erfani, SM ; Leckie, C ; Bezdek, JC (Wiley, 2017-09)
  • Item
    Online cluster validity indices for performance monitoring of streaming data clustering
    Moshtaghi, M ; Bezdek, JC ; Erfani, SM ; Leckie, C ; Bailey, J (Wiley-Hindawi, 2019-04)
  • Item
    Support vector machines resilient against training data integrity attacks
    Weerasinghe, S ; Erfani, SM ; Alpcan, T ; Leckie, C (Elsevier BV, 2019-12-01)
    Support Vector Machines (SVMs) are vulnerable to integrity attacks, in which malicious attackers distort the training data in order to compromise the decision boundary of the learned model. With increasing real-world deployment of SVMs, malicious data classified as innocuous may have harmful consequences. This paper presents a novel framework that utilizes adversarial learning, nonlinear data projections, and game theory to improve the resilience of SVMs against such training-data integrity attacks. The proposed approach introduces a layer of uncertainty through random projections applied ahead of the learners, making it challenging for the adversary to guess the specific configuration of the learners. To find appropriate projection directions, we introduce novel indices that ensure contraction of the data and maximize detection accuracy. Experiments with benchmark datasets show increases in detection rates of up to 13.5% for one-class SVMs (OCSVMs) and up to 14.1% for binary SVMs under different attack algorithms, compared with the respective base algorithms.
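    As a rough illustration of the projection-before-learner idea, the following sketch inserts a Gaussian random projection ahead of a one-class SVM using scikit-learn. The data, dimensions, and parameters are assumptions; the paper's index-guided choice of projection directions and its game-theoretic analysis are not reproduced here.

    ```python
    # Random projection ahead of an OCSVM; a hedged sketch, not the paper's framework.
    import numpy as np
    from sklearn.pipeline import make_pipeline
    from sklearn.random_projection import GaussianRandomProjection
    from sklearn.svm import OneClassSVM

    rng = np.random.default_rng(0)
    X_train = rng.normal(size=(500, 100))                    # stand-in "clean" training data
    X_test = np.vstack([
        rng.normal(size=(50, 100)),                          # innocuous test points
        rng.normal(loc=3.0, scale=1.0, size=(50, 100)),      # shifted, anomalous points
    ])

    # The random projection hides the learner's effective input space from
    # an attacker who does not know the projection matrix.
    model = make_pipeline(
        GaussianRandomProjection(n_components=20, random_state=7),
        OneClassSVM(kernel="rbf", gamma="scale", nu=0.1),
    )
    model.fit(X_train)

    pred = model.predict(X_test)                             # +1 = normal, -1 = anomaly
    print("flagged anomalous:", int((pred == -1).sum()), "of", len(X_test))
    ```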