Computing and Information Systems - Theses

  • Item
    Lazy Constraint Generation and Tractable Approximations for Large Scale Planning Problems
    Singh, Anubhav ( 2023-12)
    In our research, we explore two orthogonal but related methodologies for solving planning instances: planning algorithms based on direct but lazy, incremental heuristic search over transition systems, and planning as satisfiability. We address numerous challenges associated with solving large planning instances within practical time and memory constraints. This is particularly relevant when solving real-world problems, which often have numeric domains and resources and, therefore, a large ground representation of the planning instance. Our first contribution is an approximate novelty search, which introduces two novel methods. The first approximates novelty via sampling and Bloom filters, and the other approximates best-first search using an adaptive policy that decides whether to forgo the expansion of nodes in the open list. For our second work, we present an encoding of the partial-order causal link (POCL) formulation of temporal planning problems into a CP model that handles instances with required concurrency, which cannot be solved by sequential planners. Our third significant contribution is lifted sequential planning with lazy constraint generation, which scales very well on large instances with numeric domains and resources. Lastly, we propose a novel way of using novelty approximation as a polynomial reachability propagator, which we use to train the activity heuristics used by CP solvers.
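    As an illustration of the first idea, the sketch below shows how novelty can be approximated with a Bloom filter: a state is treated as novel if it contains at least one atom the filter has (probably) not seen before. This is a hedged, minimal sketch, not the thesis's implementation; the state representation, filter size, and hashing scheme are assumptions.

```python
# Illustrative sketch of Bloom-filter novelty approximation (not the
# thesis's code). A state counts as "novel" if it contains at least one
# atom not recorded for any previously generated state. The Bloom filter
# makes the membership test memory-bounded but probabilistic: false
# positives can wrongly mark an atom as seen, so novelty is
# under-approximated.
import hashlib

class BloomFilter:
    def __init__(self, num_bits=1 << 20, num_hashes=4):
        self.num_bits = num_bits
        self.num_hashes = num_hashes
        self.bits = bytearray(num_bits // 8)

    def _indexes(self, item):
        for seed in range(self.num_hashes):
            digest = hashlib.blake2b(f"{seed}:{item}".encode()).digest()
            yield int.from_bytes(digest[:8], "big") % self.num_bits

    def add(self, item):
        for i in self._indexes(item):
            self.bits[i // 8] |= 1 << (i % 8)

    def __contains__(self, item):
        return all(self.bits[i // 8] & (1 << (i % 8)) for i in self._indexes(item))

seen_atoms = BloomFilter()

def is_novel(state_atoms):
    """True if the state contains an atom that has (probably) not been seen."""
    novel = any(atom not in seen_atoms for atom in state_atoms)
    for atom in state_atoms:  # record this state's atoms either way
        seen_atoms.add(atom)
    return novel
```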
  • Item
    Robust and Trustworthy Machine Learning
    Huang, Hanxun ( 2024-01)
    The field of machine learning (ML) has undergone rapid advancements in recent decades. The primary objective of ML models is to extract meaningful patterns from vast amounts of data. Among the most successful models, deep neural networks (DNNs) have been deployed in many real-world applications, such as face recognition, medical image analysis, gaming agents, autonomous driving and chatbots. Current DNNs, however, are vulnerable to adversarial perturbations, where an adversary can craft malicious perturbations to manipulate these models. For example, they can inject backdoor patterns into the training data, allowing them to control the model’s prediction with the backdoor pattern (known as a backdoor attack). Also, an adversary can introduce imperceptible adversarial noise to an image and change the prediction of a trained DNN with high confidence (known as an adversarial attack). These vulnerabilities of DNNs raise security concerns, particularly if they are deployed in safety-critical applications. The current success of DNNs relies on the volume of “free” data on the internet. A recent news article revealed that a company trains large-scale commercial models using personal data obtained from social media, which raises serious privacy concerns. This has led to an open question regarding whether or not data can be made unlearnable for DNNs. Unlike backdoor attacks, unlearnable data do not seek to control the model maliciously but only prevent the model from learning meaningful patterns in the data. Recent advancements in self-supervised learning (SSL) have shown promise in enabling models to learn from data without the need for human supervision. Annotating large-scale datasets can be time-consuming and expensive, making SSL an attractive alternative. However, one challenge with SSL is the potential for dimensional collapse in the learned representations. This occurs when many features are highly correlated, giving rise to an “underfilling” phenomenon whereby the data spans only a lower-dimensional subspace. This can reduce the utility of a representation for downstream learning tasks. The first part of this thesis investigates defence strategies against backdoor attacks. Specifically, we develop a robust backdoor data detection method under the poisoning-attack threat model. We introduce a novel backdoor sample detection method, Cognitive Distillation (CD). It extracts the minimal essence of features in the input image responsible for the model’s prediction. Through an optimization process, features that are not important are removed. For data containing backdoor triggers, only a small portion of semantically meaningless features is important for classification, while clean data contain a larger number of useful semantic features. Based on this characteristic, CD provides novel insights into existing attacks and can robustly detect backdoor samples. Additionally, CD reveals the connection between dataset bias and backdoor attacks. Through a case study, we show that CD can not only detect biases that match those reported in existing work but also discover several potential biases in a real-world dataset. The second part of this work examines defences against adversarial attacks. Adversarial training is one of the most effective defences. However, despite the preliminary understanding developed for adversarial training, it is still not clear, from the architectural perspective, what configurations can lead to more robust DNNs.
This work addresses this gap via a comprehensive investigation of the impact of network width and depth on the robustness of adversarially trained DNNs. The theoretical and empirical analysis provides the following insights: (1) more parameters do not necessarily help adversarial robustness; (2) reducing capacity at the last stage (the last group of blocks) of the network can improve adversarial robustness; and (3) under the same parameter budget, there exists an optimal architectural configuration for adversarial robustness. These architectural insights can help design adversarially robust DNNs. The third part of this thesis addresses the question of whether or not data can be made unexploitable for DNNs. This work introduces a novel concept, unlearnable examples, from which DNNs cannot learn useful features. The unlearnable examples are generated through error-minimizing noise, which intentionally reduces the error of one or more training examples to close to zero. Consequently, DNNs believe there is “nothing” worth learning from these examples. The noise is restricted to be imperceptible to human eyes and thus does not affect normal data utility. This work demonstrates its flexibility under extensive experimental settings and its practicability in a case study of face recognition. The fourth part of this thesis studies robust regularization techniques to address dimensional collapse in SSL. Previous work has considered dimensional collapse at a global level. In this thesis, we demonstrate that learned representations can span a high-dimensional space globally but collapse locally. To address this, we propose a method called local dimensional regularization (LDReg). Our formulation is based on the derivation of the Fisher-Rao metric to compare and optimize local distance distributions at an asymptotically small radius for each point. By increasing the local intrinsic dimensionality, we demonstrate through a range of experiments that LDReg improves the representation quality of SSL. The empirical results also show that LDReg can regularize dimensionality at both local and global levels. In summary, this work has contributed significantly toward robust and trustworthy machine learning. It includes the detection of backdoor samples, the development of robust architectures against adversarial examples, the introduction of unlearnable examples, and a robust regularization to prevent dimensional collapse in self-supervised learning.
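    To make the unlearnable-examples idea concrete, the sketch below shows one way to compute error-minimizing noise with a PGD-style inner loop that drives the training loss towards zero rather than maximising it. This is a hedged sketch under assumed settings (a PyTorch classifier `model`, images `x` in [0, 1], labels `y`, and illustrative epsilon/step values), not the thesis's released code; in the full method, noise generation alternates with model updates.

```python
# Hedged sketch of error-minimizing noise for unlearnable examples
# (illustrative only). A bounded, per-example perturbation delta is
# optimized to *minimize* the training loss, so the perturbed examples
# look "already learned" and carry little useful signal for the model.
import torch
import torch.nn.functional as F

def error_minimizing_noise(model, x, y, epsilon=8 / 255, alpha=2 / 255, steps=10):
    delta = torch.zeros_like(x, requires_grad=True)
    for _ in range(steps):
        loss = F.cross_entropy(model(x + delta), y)
        grad, = torch.autograd.grad(loss, delta)
        with torch.no_grad():
            delta -= alpha * grad.sign()            # descend: push loss toward zero
            delta.clamp_(-epsilon, epsilon)         # keep the noise imperceptible
            delta.copy_((x + delta).clamp(0, 1) - x)  # keep images in valid range
    return delta.detach()
```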
  • Item
    A Toolkit for Semantic Localisation Analysis
    Marini, Gabriele ( 2023-11)
    While UbiComp research has steadily improved the performance of localisation systems, the analysis of the resulting datasets remains largely unaddressed. We present a tool to facilitate the querying and analysis of localisation time-series with a focus on semantic localisation. We developed a conceptual framework based on the idea of strongly-typed spaces, represented as symbolic coordinates. We also demonstrate its power and flexibility through an implementation of the framework and its application to a real-life indoor localisation scenario.
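    One way to picture "strongly-typed spaces represented as symbolic coordinates" is sketched below: a localisation fix is a path of typed, named spaces rather than an (x, y, z) triple, which makes semantic queries straightforward. All names and types here are invented for illustration and are not the toolkit's actual API.

```python
# Hedged illustration of symbolic coordinates for semantic localisation
# (invented schema, not the thesis's implementation).
from dataclasses import dataclass
from datetime import datetime
from enum import Enum

class SpaceType(Enum):
    BUILDING = "building"
    FLOOR = "floor"
    ROOM = "room"

@dataclass(frozen=True)
class SymbolicCoordinate:
    # e.g. ((SpaceType.BUILDING, "Hospital A"), (SpaceType.FLOOR, "3"),
    #       (SpaceType.ROOM, "Ward 3B"))
    parts: tuple

    def within(self, space_type: SpaceType, name: str) -> bool:
        return (space_type, name) in self.parts

@dataclass
class Fix:
    subject: str
    timestamp: datetime
    location: SymbolicCoordinate

def fixes_in_room(trace, room_name):
    """Example semantic query: all fixes located inside a given room."""
    return [f for f in trace if f.location.within(SpaceType.ROOM, room_name)]
```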
  • Item
    Explainable Computer Vision with Unsupervised Concept-based Explanations
    ZHANG, Ruihan ( 2023-10)
    This thesis focuses on concept-based explanations for deep learning models in the computer vision domain with unsupervised concepts. The success of deep learning methods has significantly improved the performance of computer vision models. However, the quickly growing complexity of the models makes explainability a more important research focus. One of the major issues in computer vision explainability is that it is unclear what the appropriate features are that can be used in explanations. Pixels are less understandable features compared with those of other domains, such as natural language processing, where words serve as features. In recent years, concepts, which refer to knowledge shared between humans and AI systems and are associated with feature maps inside the deep learning model, have provided significant performance improvements as features in explanations. Concept-based explanations have therefore become a good choice for explainability in computer vision. In most tasks, supervised concepts are the standard choice, with better performance. Nevertheless, the concept learning task in supervised concept-based explanations additionally requires a dataset with a designed concept set and instance-level concept labels. Unsupervised concepts could reduce this manual workload. In this thesis, we aim to reduce the performance gap between unsupervised and supervised concepts for concept-based explanations in computer vision. Targeting the baseline of concept bottleneck models (CBM) with supervised concepts, and exploiting the advantage that unsupervised concepts do not require concept set design and labeling, the core contributions of this thesis make unsupervised concepts an attractive alternative for concept-based explanations. Our core contributions are as follows: 1) We propose a new concept learning algorithm, invertible concept-based explanations (ICE). Explanations with unsupervised concepts can be evaluated with fidelity to the original model, like explanations with supervised concepts. Learned concepts are evaluated to be more understandable than those from baseline unsupervised concept learning methods, such as the k-means clustering used in ACE; 2) We propose a general framework of concept-based interpretable models with built-in faithful explanations similar to CBM. The framework makes the comparison between supervised and unsupervised concepts possible. We show that unsupervised concepts provide competitive performance in terms of model accuracy and concept interpretability; 3) We propose an example application of unsupervised concepts to counterfactual explanations: fast concept-based counterfactual explanations (FCCE). In the ICE concept space, we derive an analytical solution to the counterfactual loss function. The calculation of counterfactual explanations in concept space takes less than 1e-5 seconds. FCCE is also evaluated to be more interpretable through a human survey. In conclusion, unsupervised concepts were previously not a viable choice for concept-based explanations, as they suffered from issues such as being less interpretable and less faithful than supervised concept-based explanations like CBM. With our core contributions, the accuracy and interpretability of unsupervised concepts for concept-based explanations are improved to be competitive with supervised concept-based explanations.
Since unsupervised concepts require no concept set design or labeling, they are an attractive choice for concept-based explanations in computer vision, offering performance competitive with supervised concepts without the associated manual workload.
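    To illustrate why counterfactuals in a concept space can be computed almost instantly, the sketch below uses a generic linear head over concept scores, for which the smallest change that flips the decision has a closed form. This is a hedged, generic illustration, not the FCCE formulation from the thesis; the concept scores, weights, and margin are invented values.

```python
# Hedged sketch: closed-form counterfactual in a concept space with a
# linear decision head w.c + b (illustrative, not the thesis's method).
import numpy as np

def linear_counterfactual(c, w, b, margin=1e-3):
    """Smallest L2 edit to concept scores c that flips a linear decision."""
    score = float(np.dot(w, c) + b)
    # Project c onto the hyperplane w.c + b = 0, then step slightly past it.
    delta = -(score + np.sign(score) * margin) / np.dot(w, w) * w
    return c + delta

c = np.array([0.8, 0.1, 0.4])    # concept activations for one image (invented)
w = np.array([1.5, -2.0, 0.5])   # linear head weights over concepts (invented)
b = -0.2
c_counterfactual = linear_counterfactual(c, w, b)
```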
  • Item
    Word Associations as a Source of Commonsense Knowledge
    Liu, Chunhua ( 2023-12)
    Commonsense knowledge helps individuals naturally make sense of everyday situations and is important for AI systems to truly understand and interact with humans. However, acquiring such knowledge is difficult due to its implicit nature and sheer size, causing existing large-scale commonsense resources to suffer from a sparsity issue. This thesis addresses the challenge of acquiring commonsense knowledge by using word associations, a resource yet untapped for this purpose in natural language processing (NLP). Word associations are spontaneous connections between concepts that individuals make (e.g., smile and happy), reflecting the human mental lexicon. The aim of this thesis is to complement existing resources like commonsense knowledge graphs and pre-trained language models (PLMs), and to enhance models’ ability to reason in a more intuitive and human-like manner. To achieve this aim, we explore three aspects of word associations: (1) understanding the relational knowledge they encode, (2) comparing the content of large-scale word associations, and their utility for downstream NLP tasks, with widely-used commonsense knowledge resources, and (3) improving knowledge extraction from PLMs with word associations. We introduce a crowd-sourced large-scale dataset of word association explanations, which is crucial for disambiguating the multiple reasons behind word associations. This resource fills a gap in the cognitive psychology community by providing a dataset to study the rationales and structures underlying associations. By automating the process of labelling word associations with relevant relations, we demonstrate that these explanations enhance the performance of relation extractors. We conduct a comprehensive comparison between large-scale word association networks and the ConceptNet commonsense knowledge graph, analysing their structures, knowledge content, and benefits for commonsense reasoning tasks. Even though we identify systematic differences between the two resources, we find that they both yield improvements when incorporated into NLP models. Finally, we propose a diagnostic framework to understand the implicit knowledge encoded in PLMs and identify effective strategies for knowledge extraction. We show that word associations can enhance the quality of knowledge extracted from PLMs. The contributions of this thesis highlight the value of word associations in acquiring commonsense knowledge, offering insights into their utility in cognitive psychology and NLP research.
  • Item
    Multi-document Summarisation Supporting Clinical Evidence Review
    Otmakhova, Yulia ( 2023-12)
    Summarising (often contradictory) results of multiple clinical trials into conclusions which can be safely implemented by medical professionals in their daily practice is a very important, but highly challenging, task. In this thesis, we tackle it from three directions: we present our domain-specific evaluation framework, construct a new dataset for biomedical multi-document summarisation, and conduct experiments to analyse and improve the performance of summarisation models. We first examine what constitutes a well-formed answer to a clinical question, and define its three components -- PICO elements (biomedical entities), direction of findings, and modality (certainty). Next, we present a framework for human evaluation of biomedical summaries, which is based on these aspects and allows non-expert annotators to assess the factual correctness of conclusions faster and more robustly. Then, we use this framework to highlight issues with summarisation models, and examine the possibility of automating summary evaluation using large generative language models. Following that, we present our multi-document summarisation dataset, which has several levels of input and target granularity (such as documents, sentences, and claims) as well as rich annotation for the clinical evidence aspects we defined, and use it in several scenarios to test the capabilities of existing models. Finally, we turn to the question of synthesising the input studies into conclusions, in particular, reflecting the direction and certainty of findings in summaries. First, we attempt to improve the aggregation of entities and their relations using a global attention mechanism in a pre-trained multi-document summarisation model. As this proves to be difficult, we examine whether the models are at least able to detect modality and direction correctly. For that, we propose a dataset of counterfactual summaries and a method to test the models’ sensitivity to direction and certainty. Finally, we outline our preliminary experiments with a large generative language model, which shows some potential for better aggregation of direction values and PICO elements. Overall, the analysis and proposals in this thesis contribute a deeper understanding of what is required of summarisation models to generate useful and reliable multi-document summaries of clinical literature, improve their evaluation in that respect, and take a step towards better modelling choices.
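    As an illustration of the three evidence aspects the framework annotates, the sketch below shows one possible record layout for an annotated clinical conclusion; the field and value names are invented for illustration and are not the thesis's actual annotation schema.

```python
# Hedged sketch of an annotated clinical conclusion with PICO elements,
# direction of findings, and modality/certainty (invented schema).
from dataclasses import dataclass
from enum import Enum

class Direction(Enum):
    INCREASE = "increase"           # intervention improves the outcome
    DECREASE = "decrease"           # intervention reduces/worsens the outcome
    NO_DIFFERENCE = "no_difference"

class Modality(Enum):
    STRONG = "strong_claim"
    MODERATE = "moderate_claim"
    UNCERTAIN = "uncertain"

@dataclass
class ClinicalConclusion:
    population: str        # P
    intervention: str      # I
    comparator: str        # C
    outcome: str           # O
    direction: Direction
    modality: Modality
    text: str = ""

example = ClinicalConclusion(
    population="adults with type 2 diabetes",
    intervention="drug X",
    comparator="placebo",
    outcome="HbA1c",
    direction=Direction.DECREASE,
    modality=Modality.UNCERTAIN,
    text="Drug X may reduce HbA1c, but the evidence is of low certainty.",
)
```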
  • Item
    Trustworthy Machine Learning: From Images to Time Series
    Jiang, Yujing ( 2023-09)
    Deep neural networks (DNNs) have demonstrated remarkable performance in several areas, including computer vision, natural language processing, healthcare and medical imaging, speech recognition and synthesis, and many more. Recent research has highlighted the vulnerability of DNNs to adversarial attacks, which can compromise the security and reliability of machine learning models, leading to misclassifications, unauthorized access, or unintended behaviors, and posing significant risks in various applications. Adversarial machine learning has emerged as a critical research area concerned with deliberate and malicious attempts to manipulate or deceive machine learning models by exploiting their vulnerabilities to obtain a desired outcome. Evasion attacks, also known as adversarial perturbations or adversarial examples, involve modifying input data to mislead a machine learning model's predictions. The attacker introduces carefully crafted perturbations, which can be imperceptible to humans, to manipulate the model's output. Another prominent and concerning threat in this context is the backdoor attack, where an adversary manipulates the training process of a machine learning model to introduce a hidden trigger, also known as a backdoor, that can be exploited during the model's deployment. This trigger may not be visually perceptible and is designed to be activated under specific conditions, such as the presence of certain input features. Once the backdoor is implanted, the attacker can exploit it by providing inputs that activate the trigger, causing the model to produce incorrect or manipulated outputs. Attacks and defenses in adversarial machine learning are key components of research aimed at understanding and mitigating the vulnerabilities of machine learning models to adversarial manipulation. By studying attacks, researchers gain insights into the vulnerabilities of machine learning models and systems. This knowledge helps identify potential weaknesses and develop robust and secure solutions. On the other hand, detecting and mitigating adversarial and backdoor attacks are also important research areas for ensuring the integrity and trustworthiness of machine learning systems. This knowledge can be used to develop effective countermeasures, improve model robustness, and enhance overall system security. In this thesis, we investigate the trustworthiness of machine learning models and explore their learning behaviors and characteristics. While we investigate these challenges for computer vision applications, we also transfer this knowledge to time series and investigate the corresponding challenges, including building novel approaches specifically for time series and constructing an end-to-end model for both images and time series. We also explore the possibility of controlling what information can be learned by machine learning models to protect data privacy and mitigate possible attacks. The first part of our work aims to explore a more efficient and effective way to improve adversarial robustness with adversarial training on images. We propose Dual Head Adversarial Training (DH-AT), an improved variant of adversarial training (AT) that attaches a second head to one intermediate layer of the network. The two heads can be trained either simultaneously or independently with different training parameters to combine different levels of robustness in a single model.
The main head can also be loaded directly from a pre-trained model without any modifications, in which case only one head requires training. In real-world scenarios, the second head and a lightweight CNN together form a strengthening mechanism that improves the adversarial robustness of any existing model. Additionally, the second head can be switched off when robustness is no longer the primary concern. Adversarial machine learning has been extensively researched for computer vision applications in the context of images, while there are few works on non-DNN-based time series models. It is still unclear which strategies are more effective on time series. Moreover, time series are of diverse types, such as stock prices, temperature readings, weather data, and heart rate monitoring, to name a few. As such, inflexible attack patterns can hardly be effective on all types of time series. To fill this gap, in the second part of our work, we study the problem of backdoor attacks on time series and propose a novel generative approach for crafting stealthy sample-specific backdoor trigger patterns. We also reveal the unique challenge posed to time series backdoor attacks by the inherent properties of time series. By leveraging generative adversarial networks (GANs), our approach can generate backdoored time series that are as realistic as real time series, while achieving a high attack success rate. Furthermore, by training the trigger pattern generator on multiple types of time series, we can obtain a universal generator. We also empirically show that our proposed attack can generate stealthy and effective backdoor attacks against state-of-the-art DNN-based time series models and is resistant to potential backdoor defenses. The third part of our work involves training a robust deep learning model in the presence of backdoor samples. We extend the work of Anti-Backdoor Learning (ABL) and propose a novel End-to-End Anti-Backdoor Learning (E2ABL) method that can be used for both image and time series inputs. Different from the original ABL defense, which is a complex two-stage training method, E2ABL achieves end-to-end robust training with the help of a second classification head attached to the shallow layers of a DNN. With the second head, E2ABL traps potential backdoor samples at the shallow layers and purifies their labels dynamically during training. Through extensive experiments, we empirically show that E2ABL outperforms existing defenses by a considerable margin against 9 state-of-the-art image-domain and 3 time-series-domain backdoor attacks. The fourth part of our work extends unlearnable examples, which use invisible noise to prevent data from being easily exploited by deep learning models, from images to time series. We propose a specific type of error-minimizing noise that aims to make time series data unlearnable to deep learning models. It can be applied at various scales, ranging from the entire time series input to small patches. Importantly, the noise is designed to be resistant to common data filtering methods, ensuring its persistence in obstructing model learning. In summary, this Ph.D. thesis aims to provide comprehensive insights into the domain of trustworthy machine learning, with a specific focus on backdoor attacks, their detection, and mitigation strategies. By investigating various attack models, detection techniques, and mitigation strategies, this research contributes to the development of more robust and secure machine learning systems.
The findings presented in this thesis will serve as a valuable resource for researchers, practitioners, and policymakers working in the field of trustworthy machine learning and cybersecurity.
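    As a rough illustration of the dual-head idea in the first part, the sketch below attaches a second classification head to an intermediate stage of a standard backbone; the backbone choice, layer split, head sizes, and fusion rule are assumptions made for illustration, not the DH-AT reference implementation.

```python
# Hedged sketch of a dual-head network: a second classification head is
# attached to an intermediate feature map, so the two heads can be trained
# with different (adversarial) settings and combined at inference.
import torch
import torch.nn as nn
import torchvision

class DualHeadNet(nn.Module):
    def __init__(self, num_classes=10):
        super().__init__()
        backbone = torchvision.models.resnet18(num_classes=num_classes)
        # Stem plus the first three stages feed the second head; the full
        # backbone keeps its original (main) head on the last stage.
        self.stem = nn.Sequential(
            backbone.conv1, backbone.bn1, backbone.relu, backbone.maxpool,
            backbone.layer1, backbone.layer2, backbone.layer3,
        )
        self.last_stage = backbone.layer4
        self.pool = nn.AdaptiveAvgPool2d(1)
        self.main_head = backbone.fc                      # over layer4 features (512-d)
        self.second_head = nn.Linear(256, num_classes)    # over layer3 features (256-d)

    def forward(self, x):
        mid = self.stem(x)
        main_logits = self.main_head(torch.flatten(self.pool(self.last_stage(mid)), 1))
        second_logits = self.second_head(torch.flatten(self.pool(mid), 1))
        # Simple average fusion at inference; the heads can also be trained
        # and used separately.
        return 0.5 * (main_logits + second_logits)
```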
  • Item
    Bring-Your-Own-Device (BYOD) Security Management in Hospitals – A Sociotechnical Approach
    Wani, Tafheem Ahmad ( 2023-09)
    Bring-Your-Own-Device or ‘BYOD’ refers to the use of personal devices such as laptops, smartphones, or tablets for work purposes. Among the top industries driving BYOD is healthcare, with a great demand for BYOD use in hospitals. The multifunctional and ubiquitous nature of modern mobile devices allows them to be used for a variety of purposes, including clinical documentation, electronic medical record and diagnostic services, clinical photography, and clinical communication and collaboration, among other tasks. Overall, BYOD in hospitals can improve mobility and productivity among clinicians. However, BYOD use also leads to data security concerns, particularly due to the risk of leaking sensitive patient information. In a BYOD environment, device owners such as doctors, nurses and allied health professionals may hold significant control and custody of the sensitive patient data they access through their personal devices. This extends the scope for risks such as staff misuse and human error, known to be the leading causes of healthcare data breaches, especially in the absence of hospital-installed security controls. Furthermore, the stringent healthcare data privacy laws with which healthcare organisations need to comply, coupled with the fact that the healthcare industry is the most affected by data breaches, make BYOD use a major challenge for hospitals. Previous research on BYOD security management has generally been limited, fragmented, and largely techno-centric. More contextualised, industry-based research into BYOD security is called for. Empirical studies exploring hospital BYOD security challenges are scarce and cover few aspects of the topic. Modern healthcare cybersecurity breaches also demand a systematic and holistic approach to understanding hospital BYOD security. This thesis therefore aimed to address these gaps by investigating hospital BYOD security management through a holistic socio-technical lens. The PPT (People-Policy-Technology) model was used to explore cultural, organisational, managerial and policy-related factors and their impact on hospital BYOD security, in addition to technical factors. The research question “How can a socio-technical approach improve BYOD security management in hospitals?” was addressed using Mixed Method Action Research (MMAR), a form of action research in which an iterative mechanism was used to synergistically integrate results from multiple studies to answer the research question. First, a literature review identified prominent hospital BYOD security risks and produced a preliminary hospital BYOD (hBYOD) security framework, consisting of guidelines for secure hospital BYOD use. Second, IT management stakeholders and BYOD clinical users were surveyed and interviewed to understand, respectively, the BYOD security management practices employed by Australian hospitals and the clinicians’ preferences and security behaviour with respect to BYOD use. Third, all findings were synthesised and merged through the MMAR approach to refine the hBYOD framework in light of the evidence gathered. Finally, the recommendatory guidelines provided by the framework were mapped to a newly formed hospital BYOD security maturity model to streamline their implementation, and a pilot implementation study in a major hospital tested the utility of this model. This thesis makes a significant contribution by enabling improvements in hospital data security.
It provides comprehensive guidance across the BYOD security lifecycle, allowing evaluation and improvements in hospital BYOD socio-technical security practices through the hospital BYOD security framework and maturity model. It can therefore benefit hospital policymakers, technologists, and clinical stakeholder representatives through informed decision-making and BYOD strategy development. Furthermore, the thesis elucidates how alignment between cultures of clinical productivity and data security may be achieved through the application of socio-technical theory. It also demonstrates the value of participatory and collaborative methods for guideline development in healthcare cybersecurity.
  • Item
    Reflected Reality: Augmented Reality Interaction with Mirror Reflections
    Zhou, Qiushi ( 2023-11)
    Mirror reflections enable a compelling visuomotor experience that allows people to simultaneously embody two spaces: through the physical body in front of the mirror and through the reflected body in the illusory space behind the mirror. This experience offers unique affordances for Augmented Reality (AR) interaction that leverage the natural human perception of the relationship between the two bodies. This thesis explores the possibilities of AR interaction with mirror reflections by unpacking and investigating this relationship. Through a systematic literature review of Extended Reality interaction that is not from the first-person perspective (1PP), we identify opportunities for novel AR interaction techniques from the second-person perspective (2PP) using the reflected body in the mirror (Article I). Following this, we contribute Reflected Reality: a design space for AR interaction with mirror reflections that covers interaction from different perspectives (1PP/2PP), using different spatial frames of reference (egocentric/allocentric), and under different perceptions of the use of the space in the mirror (as a reflection or an extension of the physical space) (Article II). Previous work and the evaluation results of reflected reality interaction suggest that most of its novel interaction affordances revolve around the physical and the reflected bodies in the egocentric spaces. Following this observation, we conduct two empirical studies to investigate how users perceive virtual object locations around their physical bodies through a target acquisition task (Article III), and to understand how users can perform bodily interaction using their reflected bodies in the mirror through a movement acquisition task following a virtual instructor (Article IV). Together, results from these studies provide a fundamental knowledge base for designing reflected reality interaction in different task scenarios. After investigating the spatial affordance of mirror reflections for AR interaction, this thesis further explores their affordance for embodied perception through the mediation of the reflected user. Drawing on the results of Article IV, we conduct a systematic review of dance and choreography in HCI that reveals opportunities for using AR with mirror reflections to mediate the integration of the visual presentation and kinaesthetic sensation of body movement (Article V). We present the findings and discussions from a series of workshops on dance improvisation with a prototype AR mirror, which reveal the affordance of a multi-layered embodied presence across the mirror as perceived by dancers (Article VI). We conclude this thesis with a discussion that summarises the knowledge gained from the empirical studies, elucidates the implications of the design space and novel interaction techniques, and illuminates future research directions inspired by its empirical and theoretical implications.