Computing and Information Systems - Research Publications

Search Results

Now showing 1 - 10 of 32
  • Item
    Directive Explanations for Actionable Explainability in Machine Learning Applications
    Singh, R ; Miller, T ; Lyons, H ; Sonenberg, L ; Velloso, E ; Vetere, F ; Howe, P ; Dourish, P (Association for Computing Machinery, 2023-12)
    In this article, we show that explanations of decisions made by machine learning systems can be improved by not only explaining why a decision was made but also explaining how an individual could obtain their desired outcome. We formally define the concept of directive explanations (those that offer specific actions an individual could take to achieve their desired outcome), introduce two forms of directive explanations (directive-specific and directive-generic), and describe how these can be generated computationally. We investigate people's preferences for and perceptions of directive explanations through two online studies, one quantitative and the other qualitative, each covering two domains (credit scoring and employee satisfaction). We find a significant preference for both forms of directive explanations over non-directive counterfactual explanations. However, we also find that these preferences depend on several factors, including individual differences and social context. We conclude that deciding what type of explanation to provide requires information about the recipients and other contextual information. This reinforces the need for a human-centered and context-specific approach to explainable AI.
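The article itself does not include code; the following is a minimal illustrative sketch of the general idea of a directive explanation, i.e. a small set of actionable interventions that flips a model's decision to the desired outcome. The toy approval rule, feature names, and intervention list below are hypothetical placeholders, not the authors' models or data.

```python
from itertools import combinations

def approve(applicant):
    """Toy stand-in for a trained credit-scoring model (hypothetical rule)."""
    return applicant["income"] / 1000 - applicant["debt"] / 1000 >= 30

# Candidate actions the applicant could actually take: (description, feature, delta).
INTERVENTIONS = [
    ("increase your annual income by $5,000", "income", +5000),
    ("pay off $2,000 of existing debt", "debt", -2000),
    ("pay off $5,000 of existing debt", "debt", -5000),
]

def directive_explanation(applicant, max_actions=2):
    """Return the smallest set of interventions that leads to approval, if any."""
    for k in range(1, max_actions + 1):
        for combo in combinations(INTERVENTIONS, k):
            candidate = dict(applicant)
            for _, feature, delta in combo:
                candidate[feature] += delta
            if approve(candidate):
                return [description for description, _, _ in combo]
    return None

if __name__ == "__main__":
    applicant = {"income": 32000, "debt": 5000}
    actions = directive_explanation(applicant)
    if actions:
        print("To obtain approval you could:", "; ".join(actions))
    else:
        print("No directive explanation found within the action budget.")
```

A directive-generic variant of this sketch would enumerate all qualifying action sets rather than returning the first one found.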
  • Item
    Algorithmic Decisions, Desire for Control, and the Preference for Human Review over Algorithmic Review
    Lyons, H ; Miller, T ; Velloso, E (Association for Computing Machinery, 2023)
  • Item
    Increasing the Value of XAI for Users: A Psychological Perspective
    Hoffman, RR ; Miller, T ; Klein, G ; Mueller, ST ; Clancey, WJ (Springer Science and Business Media LLC, 2023-01-01)
    This paper summarizes the psychological insights and related design challenges that have emerged in the field of Explainable AI (XAI). This summary is organized as a set of principles, some of which have recently been instantiated in XAI research. The primary aspects of implementation to which the principles refer are the design and evaluation stages of XAI system development, that is, principles concerning the design of explanations and the design of experiments for evaluating the performance of XAI systems. The principles can serve as guidance, to ensure that AI systems are human-centered and effectively assist people in solving difficult problems.
  • Item
    What's the Appeal? Perceptions of Review Processes for Algorithmic Decisions
    Lyons, H ; Wijenayake, S ; Miller, T ; Velloso, E (Association for Computing Machinery, 2022)
  • Item
    Observing multiplayer boardgame play at a distance
    Rogerson, MJ ; Newn, J ; Singh, R ; Baillie, E ; Papasimeon, M ; Benke, L ; Miller, T (Association for Computing Machinery, 2021-10-15)
    More than 18 months after it was first identified, the COVID-19 pandemic continues to restrict researchers' opportunities to conduct research in face-to-face settings. This affects studies requiring participants to be co-located, such as those that examine the play of multiplayer boardgames. We present two methods for observing the play of boardgames at a distance, supported by two case studies. We report on the value and use of both methods, and reflect on five core concepts that we observed during the studies: data collection and analysis, recruitment and participation, the temporality of play, the sociality of play and material engagement, and the researcher's role in the study. This work highlights the different considerations that online studies generate when compared to in-person play and other study methods. Future work will present an in-depth discussion of the findings of these studies and present recommendations for the adoption of these distinct methods.
  • Item
    Goal recognition for deceptive human agents through planning and gaze
    Le, T ; Singh, R ; Miller, T (AI Access Foundation, 2021-01-01)
    Eye gaze has the potential to provide insight into the minds of individuals, and prior research has used this idea to improve human goal recognition by combining a person's actions and gaze. However, most existing research assumes that people are rational and honest. In adversarial scenarios, people may deliberately alter their actions and gaze, which presents a challenge to goal recognition systems. In this paper, we present new models for goal recognition under deception that combine gaze behaviour with the observed movements of the agent. These models aim to detect when a person is being deceptive by analysing their gaze patterns, and use this information to adjust the goal recognition. We evaluated our models in two human-subject studies: (1) using data collected from 30 individuals playing a navigation game inspired by an existing deception study, and (2) using data collected from 40 individuals playing a competitive game (Ticket To Ride). We found that one of our models (Modulated Deception Gaze+Ontic) offers promising results compared to the previous state-of-the-art model in both studies. Our work complements existing adversarial goal recognition systems by equipping them with the ability to handle ambiguous gaze behaviours.
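The formal models are given in the paper; purely as a rough sketch of the general idea, goal recognition of this kind can be framed as a Bayesian update that combines action-based and gaze-based evidence, with the gaze term down-weighted when deception is suspected. This is not the authors' Modulated Deception Gaze+Ontic model; the likelihood values and the deception score below are hypothetical placeholders.

```python
# Sketch only: combine action-based and gaze-based evidence for goal recognition,
# trusting the gaze term less when the observed gaze appears deceptive.
# NOT the paper's Modulated Deception Gaze+Ontic model; all numbers are made up.

def posterior_over_goals(goals, prior, action_likelihood, gaze_likelihood,
                         deception_score):
    """
    goals: list of candidate goals
    prior: dict goal -> P(goal)
    action_likelihood: dict goal -> P(observed movements | goal)
    gaze_likelihood: dict goal -> P(observed gaze | goal)
    deception_score: float in [0, 1]; 1 means the gaze looks highly deceptive
    """
    gaze_weight = 1.0 - deception_score  # discount gaze evidence under suspected deception
    unnormalised = {
        g: prior[g] * action_likelihood[g] * (gaze_likelihood[g] ** gaze_weight)
        for g in goals
    }
    total = sum(unnormalised.values())
    return {g: p / total for g, p in unnormalised.items()}

if __name__ == "__main__":
    posterior = posterior_over_goals(
        goals=["A", "B"],
        prior={"A": 0.5, "B": 0.5},
        action_likelihood={"A": 0.3, "B": 0.2},  # movements slightly favour goal A
        gaze_likelihood={"A": 0.1, "B": 0.6},    # gaze strongly favours goal B
        deception_score=0.8,                     # but the gaze pattern looks deceptive
    )
    print(posterior)  # gaze is largely discounted, so A remains slightly ahead
```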
  • Item
    Emotionalism Within People-Oriented Software Design
    Sherkat, M ; Miller, T ; Mendoza, A ; Burrows, R (Frontiers Media SA, 2021-11-22)
    In designing most software applications, much effort is placed on functional goals, which make a software system useful. However, failing to consider emotional goals, which make a software system pleasurable to use, can result in disappointment and system rejection even if the functional goals are well implemented. Although several studies have emphasised the importance of people's emotional goals in developing software, there is little advice on how to address these goals in the software development process. This paper bridges the gap between eliciting emotional goals and the software design process by proposing a novel technique, the Emotional Goal Systematic Analysis Technique (EG-SAT), for systematically analysing people's emotional goals alongside functional and quality goals. EG-SAT supports in-depth analysis of the emotional goals of a software system and provides a visual notation for representing the analysis, facilitating communication and documentation. It also provides traceability of emotional goals in system design by connecting them to functional and quality goals. To demonstrate the method in use, we conducted a two-part evaluation. First, EG-SAT was used to analyse the emotional goals of potential users of a mobile learning application that provides information about low-carbon living to tradespeople and professionals in the Australian building industry, and the results were compared with a professionally developed baseline. Second, we ran a semi-controlled experiment in which 12 participants applied EG-SAT and another technique to the same case study. The outcomes show that EG-SAT helped participants analyse emotional goals and gain valuable insights into the functional and non-functional goals needed to address people's emotional goals. The key novelty of EG-SAT is that it is easy to learn and easy to use, helping system analysts gain insight into how to address people's emotional goals. Furthermore, EG-SAT enables system analysts to convert emotional goals into traditional functional and non-functional goals that existing software engineering methodologies can analyse without excessive effort.
  • Item
    'Knowing Whether' in Proper Epistemic Knowledge Bases
    Miller, T ; Felli, P ; Muise, C ; Pearce, AR ; Sonenberg, L (AAAI Press, 2016)
    Proper epistemic knowledge bases (PEKBs) are syntactic knowledge bases that use multi-agent epistemic logic to represent nested multi-agent knowledge and belief. PEKBs have certain syntactic restrictions that lead to desirable computational properties; primarily, a PEKB is a conjunction of modal literals and therefore contains no disjunction. Sound entailment can be checked in polynomial time, and is complete for a large set of arbitrary formulae in the logics Kn and KDn. In this paper, we extend PEKBs to deal with a restricted form of disjunction: 'knowing whether'. An agent i knows whether Q iff agent i knows Q or knows ¬Q; that is, □ᵢQ or □ᵢ¬Q. In our experience, the ability to represent that an agent knows whether something holds is useful in many multi-agent domains. We represent knowing whether with a modal operator and present sound polynomial-time entailment algorithms for PEKBs with the knowing-whether operator in Kn and KDn, although these are complete for a smaller class of queries than standard PEKBs.
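Written out in standard multi-agent modal notation, the definition quoted in the abstract reads as below; the symbol Δᵢ for "agent i knows whether" is a common convention in the literature and a notational choice for this sketch, not necessarily the paper's own symbol.

```latex
\documentclass{article}
\usepackage{amsmath,amssymb}
\begin{document}
% "Agent i knows whether phi" as a defined operator over Box_i, per the abstract.
% The symbol \Delta_i is a notational choice for this sketch.
\[
  \Delta_i \varphi \;\equiv\; \Box_i \varphi \,\lor\, \Box_i \lnot\varphi
\]
% An extended PEKB with the knowing-whether operator is still a conjunction of
% modal literals, for example:
\[
  \Box_1 p \,\land\, \Box_1 \Box_2 \lnot q \,\land\, \Delta_2 r
\]
\end{document}
```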
  • Item
    Planning for a Single Agent in a Multi-Agent Environment Using FOND
    Muise, C ; Felli, P ; Miller, T ; Pearce, AR ; Sonenberg, L ; Kambhampati, S (AAAI Press, 2016)
    Single-agent planning in a multi-agent environment is challenging because the actions of other agents can affect our ability to achieve a goal. From a given agent's perspective, actions of others can be viewed as non-deterministic outcomes of that agent's actions. While conceptually simple, this interpretation of planning in a multi-agent environment as non-deterministic planning remains challenging, not only because of the non-determinism resulting from others' actions, but because it is not clear how to compactly model the possible actions of others in the environment. In this paper, we cast the problem of planning in a multi-agent environment as one of Fully-Observable Non-Deterministic (FOND) planning. We extend a non-deterministic planner to plan in a multi-agent setting, allowing non-deterministic planning technology to solve a new class of planning problems. To improve efficiency in domains too large to solve optimally, we propose a technique that uses the goals and possible actions of other agents to focus the search on a set of plausible actions. We evaluate our approach on existing and new multi-agent benchmarks, demonstrating that modelling the other agents' goals improves the quality of the resulting solutions.
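As a minimal sketch of the core modelling idea only (another agent's plausible responses folded into non-deterministic outcomes of the planning agent's own action), and not the authors' compilation or planner, consider the following; the two-cell domain is made up for illustration.

```python
# Sketch: from the planning agent's point of view, taking an action yields one
# successor state per plausible response of the other agent, i.e. a
# non-deterministic (FOND-style) action. The two-cell grid is a made-up example,
# not one of the paper's benchmarks.

from dataclasses import dataclass

@dataclass(frozen=True)
class State:
    my_pos: str
    other_pos: str

def other_agent_responses(state):
    """Plausible moves of the other agent: stay put or move to the other cell."""
    swapped = "B" if state.other_pos == "A" else "A"
    return [state.other_pos, swapped]

def move(state, target):
    """Our 'move to target' action, compiled to a set of possible successors,
    one per response of the other agent."""
    return {State(my_pos=target, other_pos=o) for o in other_agent_responses(state)}

if __name__ == "__main__":
    s0 = State(my_pos="A", other_pos="B")
    for successor in move(s0, "B"):
        print(successor)  # a FOND policy must handle every listed successor
```

A FOND planner then needs a policy that reaches the goal no matter which successor actually occurs, which is exactly the non-determinism the abstract describes.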