Computing and Information Systems - Research Publications

Search Results

Now showing 1 - 10 of 33
  • Item
    Benchmarking adversarially robust quantum machine learning at scale
    West, MT ; Erfani, SM ; Leckie, C ; Sevior, M ; Hollenberg, LCL ; Usman, M (American Physical Society (APS), 2023-04-01)
    Machine learning (ML) methods such as artificial neural networks are rapidly becoming ubiquitous in modern science, technology, and industry. Despite their accuracy and sophistication, neural networks can be easily fooled by carefully designed malicious inputs known as adversarial attacks. While such vulnerabilities remain a serious challenge for classical neural networks, the extent of their existence is not fully understood in the quantum ML setting. In this paper, we benchmark the robustness of quantum ML networks, such as quantum variational classifiers (QVC), at scale by performing rigorous training for both simple and complex image datasets and through a variety of high-end adversarial attacks. Our results show that QVCs offer a notably enhanced robustness against classical adversarial attacks by learning features, which are not detected by the classical neural networks, indicating a possible quantum advantage for ML tasks. Contrarily, and remarkably, the converse is not true, with attacks on quantum networks also capable of deceiving classical neural networks. By combining quantum and classical network outcomes, we propose an adversarial attack detection technology. Traditionally quantum advantage in ML systems has been sought through increased accuracy or algorithmic speed-up, but our study has revealed the potential for a kind of quantum advantage through superior robustness of ML models, whose practical realization will address serious security concerns and reliability issues of ML algorithms employed in a myriad of applications including autonomous vehicles, cybersecurity, and surveillance robotic systems.
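The attacks benchmarked in this paper are not spelled out in the listing; as a minimal illustrative sketch (not the paper's method), a fast-gradient-sign-style perturbation against a toy linear logistic classifier looks like this in NumPy. All variable names and the toy data are hypothetical:

```python
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def fgsm_perturb(x, y, w, eps):
    """FGSM-style attack on a linear logistic classifier.

    Moves x by eps along the sign of the input gradient of the loss
    -log p(y | x), i.e. in the direction that most increases the loss.
    """
    margin = y * np.dot(w, x)            # label y in {-1, +1}
    grad_x = -y * sigmoid(-margin) * w   # d loss / d x
    return x + eps * np.sign(grad_x)

# Toy demo: a correctly classified point becomes misclassified.
w = np.array([1.0, -2.0])
x = np.array([2.0, 0.5])   # w.x = 1.0 > 0, so predicted class +1
y = 1.0
x_adv = fgsm_perturb(x, y, w, eps=1.5)
```

The same one-step recipe, applied with a neural network's input gradient instead of the closed-form linear one, is the prototypical "classical adversarial attack" the abstract refers to.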
  • Item
    Towards quantum enhanced adversarial robustness in machine learning
    West, MT ; Tsang, S-L ; Low, JS ; Hill, CD ; Leckie, C ; Hollenberg, LCL ; Erfani, SM ; Usman, M (NATURE PORTFOLIO, 2023-06)
  • Item
    Adversarial Coreset Selection for Efficient Robust Training
    Dolatabadi, HM ; Erfani, SM ; Leckie, C (SPRINGER, 2023-12)
    It has been shown that neural networks are vulnerable to adversarial attacks: adding well-crafted, imperceptible perturbations to their input can modify their output. Adversarial training is one of the most effective approaches to training robust models against such attacks. Unfortunately, this method is much slower than vanilla training of neural networks, since it needs to construct adversarial examples for the entire training data at every iteration. By leveraging the theory of coreset selection, we show how selecting a small subset of training data provides a principled approach to reducing the time complexity of robust training. To this end, we first provide convergence guarantees for adversarial coreset selection. In particular, we show that the convergence bound is directly related to how well our coresets can approximate the gradient computed over the entire training data. Motivated by our theoretical analysis, we propose using this gradient approximation error as our adversarial coreset selection objective to reduce the training set size effectively. Once built, we run adversarial training over this subset of the training data. Unlike existing methods, our approach can be adapted to a wide variety of training objectives, including TRADES, ℓp-PGD, and Perceptual Adversarial Training. We conduct extensive experiments to demonstrate that our approach speeds up adversarial training by 2–3 times, with only a slight degradation in clean and robust accuracy.
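The selection objective described above (pick a subset whose gradient approximates the full-data gradient) can be sketched with a naive greedy search. This is a toy simplification, not the paper's algorithm; per-example gradients are given as a plain array and all names are illustrative:

```python
import numpy as np

def greedy_coreset(per_example_grads, k):
    """Greedily pick k examples whose averaged gradient best
    approximates the full-data average gradient.

    A simplified stand-in for the gradient-approximation objective
    described in the abstract; real coreset methods use faster,
    weighted selection rules.
    """
    full_grad = per_example_grads.mean(axis=0)
    selected = []
    for _ in range(k):
        best_i, best_err = None, np.inf
        for i in range(len(per_example_grads)):
            if i in selected:
                continue
            approx = per_example_grads[selected + [i]].mean(axis=0)
            err = np.linalg.norm(full_grad - approx)
            if err < best_err:
                best_i, best_err = i, err
        selected.append(best_i)
    return selected, best_err
```

Adversarial training is then run only on the selected subset, which is where the 2–3x speed-up reported in the abstract comes from.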
  • Item
    Exploiting patterns to explain individual predictions
    Jia, Y ; Bailey, J ; Ramamohanarao, K ; Leckie, C ; Ma, X (Springer London, 2020-03)
    Users need to understand the predictions of a classifier, especially when decisions based on the predictions can have severe consequences. The explanation of a prediction reveals the reason why a classifier makes a certain prediction, and it helps users to accept or reject the prediction with greater confidence. This paper proposes an explanation method called Pattern Aided Local Explanation (PALEX) to provide instance-level explanations for any classifier. PALEX takes as inputs a classifier, a test instance, and a frequent pattern set summarizing the training data of the classifier, and then outputs the supporting evidence that the classifier considers important for the prediction of the instance. To study the local behavior of a classifier in the vicinity of the test instance, PALEX uses the frequent pattern set from the training data as an extra input to guide the generation of new synthetic samples in the vicinity of the test instance. Contrast patterns are also used in PALEX to identify locally discriminative features in the vicinity of a test instance. PALEX is particularly effective for scenarios where multiple explanations exist. In our experiments, we compare PALEX to several state-of-the-art explanation methods over a range of benchmark datasets and find that it can identify explanations with both high precision and high recall.
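PALEX's pattern-guided sampling is not reproduced in this listing; as a generic sketch of the same underlying idea (sample synthetic points near the test instance, query the black-box classifier, fit an interpretable local surrogate), the following toy code uses plain Gaussian sampling instead of frequent patterns. All names and parameters are illustrative:

```python
import numpy as np

def local_explanation(predict_fn, x, n_samples=500, scale=0.1, seed=0):
    """Rank feature importance for predict_fn's behaviour near x.

    Samples synthetic points around x (PALEX instead guides this
    sampling with frequent patterns mined from the training data),
    queries the black-box classifier, and fits a least-squares
    linear surrogate whose coefficients rank local importance.
    """
    rng = np.random.default_rng(seed)
    X = x + scale * rng.standard_normal((n_samples, x.size))
    y = predict_fn(X)
    A = np.hstack([X, np.ones((n_samples, 1))])  # add intercept column
    coef, *_ = np.linalg.lstsq(A, y, rcond=None)
    return coef[:-1]                             # drop the intercept

# Toy black box that depends only on feature 0.
predict = lambda X: (X[:, 0] > 0).astype(float)
weights = local_explanation(predict, np.array([0.0, 5.0]))
```

Near the instance, the surrogate assigns a large weight to feature 0 and a near-zero weight to the irrelevant feature 1, which is the kind of "supporting evidence" an instance-level explanation surfaces.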
  • Item
    Generative Adversarial Networks for anomaly detection on decentralised data
    Katzef, M ; Cullen, AC ; Alpcan, T ; Leckie, C (PERGAMON-ELSEVIER SCIENCE LTD, 2022)
  • Item
    On the effectiveness of isolation-based anomaly detection in cloud data centers
    Calheiros, RN ; Ramamohanarao, K ; Buyya, R ; Leckie, C ; Versteeg, S (WILEY, 2017-09-25)
    The high volume of monitoring information generated by large-scale cloud infrastructures poses a challenge to the capacity of cloud providers to detect anomalies in the infrastructure. Traditional anomaly detection methods are resource-intensive and computationally complex for training and/or detection, which is undesirable in very dynamic, large-scale environments such as clouds. Isolation-based methods have the advantage of low complexity for training and detection and are optimized for detecting failures. In this work, we explore the feasibility of Isolation Forest, an isolation-based anomaly detection method, for detecting anomalies in large-scale cloud data centers. We propose a method to encode time-series information as extra attributes that enable temporal anomaly detection, and establish its ability to adapt to seasonality and trends in the time series and to be applied online and in real time.
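The exact time-series encoding from this paper is not given in the listing; one plausible minimal sketch of the idea is to turn a univariate monitoring signal into lagged feature rows that an isolation-based detector can consume. The function name and toy data below are illustrative, not from the paper:

```python
import numpy as np

def add_temporal_attributes(series, n_lags):
    """Encode a univariate monitoring series as rows of
    [x_t, x_{t-1}, ..., x_{t-n_lags}], so that an isolation-based
    detector sees temporal context for each observation, in the
    spirit of the extra time-coded attributes described above."""
    series = np.asarray(series, dtype=float)
    rows = [series[i - n_lags:i + 1][::-1]        # newest value first
            for i in range(n_lags, len(series))]
    return np.vstack(rows)

cpu_load = [0.2, 0.3, 0.25, 0.9, 0.28]   # a spike at t = 3
X = add_temporal_attributes(cpu_load, n_lags=2)
# X can now be passed to an isolation-based detector,
# e.g. scikit-learn's IsolationForest.fit(X).
```

A point that is unremarkable in isolation (0.9 is a valid load) can still be isolated quickly in lag space when it breaks the local temporal pattern, which is what makes the encoding useful for temporal anomaly detection.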
  • Item
    Exponentially Weighted Ellipsoidal Model for Anomaly Detection
    Moshtaghi, M ; Erfani, SM ; Leckie, C ; Bezdek, JC (WILEY, 2017-09)
  • Item
    Comparative evaluation of performance measures for shading correction in time-lapse fluorescence microscopy
    Liu, L ; Kan, A ; Leckie, C ; Hodgkin, PD (WILEY, 2017-04)
    Time-lapse fluorescence microscopy is a valuable technology in cell biology, but it suffers from the inherent problem of intensity inhomogeneity due to uneven illumination or camera nonlinearity, known as shading artefacts. These artefacts lead to inaccurate estimates of single-cell features such as average and total intensity. Numerous shading correction methods have been proposed to remove this effect. In order to compare the performance of different methods, many quantitative performance measures have been developed. However, there is little discussion of which performance measure should generally be applied for evaluation on real data, where the ground truth is absent. In this paper, state-of-the-art shading correction methods and performance evaluation methods are reviewed. We implement 10 popular shading correction methods on two artificial datasets and four real ones. In order to make an objective comparison between those methods, we employ a number of quantitative performance measures. Extensive validation demonstrates that the coefficient of joint variation (CJV) is the most applicable measure for time-lapse fluorescence images. Based on this measure, we propose a novel shading correction method that performs better than well-established methods on a range of real data.
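The coefficient of joint variation (CJV) named above has a standard form in the intensity-inhomogeneity literature: the summed standard deviations of two intensity classes divided by the separation of their means, with lower values indicating better correction. A minimal sketch, assuming that two-class definition (the paper's exact formulation may differ in detail) and toy intensity values:

```python
import numpy as np

def cjv(region_a, region_b):
    """Coefficient of joint variation between two intensity classes
    (e.g. cell foreground vs background pixels).

    Lower values mean tighter, better-separated classes, i.e. more
    effective shading correction."""
    a = np.asarray(region_a, dtype=float)
    b = np.asarray(region_b, dtype=float)
    return (a.std() + b.std()) / abs(a.mean() - b.mean())

# Shading correction should shrink within-class spread -> lower CJV.
before = cjv([10, 30, 20], [60, 90, 75])   # uncorrected intensities
after = cjv([19, 21, 20], [74, 76, 75])    # after correction
```

Because CJV needs only pixel statistics and no ground-truth shading field, it is usable on real data where the true illumination profile is unknown, which is why a ground-truth-free measure matters here.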
  • Item
    Online cluster validity indices for performance monitoring of streaming data clustering
    Moshtaghi, M ; Bezdek, JC ; Erfani, SM ; Leckie, C ; Bailey, J (WILEY-HINDAWI, 2019-04)
  • Item
    A time decoupling approach for studying forum dynamics
    Kan, A ; Chan, J ; Hayes, C ; Hogan, B ; Bailey, J ; Leckie, C (SPRINGER, 2013-11)