Computing and Information Systems - Theses

The Role of Explanations in Enhancing Algorithmic Fairness Perceptions
Afrashteh, Sadaf (2021)
Decision-makers employ machine learning algorithms to gain insights from data and make better decisions. More specifically, advanced algorithms can help organizations classify their customers and predict their behavior with high accuracy. However, the opaque nature of these algorithms has raised concerns about potential unintended consequences, with unfair decisions at the center. Unfair decisions negatively impact both organizations and customers: customers may lose trust in organizations that treat them unfairly, and organizations' reputations may consequently be put at risk. Transparency provision has been proposed to organizations as a way of addressing algorithmic opacity. One approach to transparency provision is explaining to decision-makers how algorithms perform and how they reach a decision. Such explanations can, in turn, influence decision-makers' perceptions of the fairness of algorithms' decisions. It is therefore important to understand how explanations, and the way they are communicated, shape the fairness perceptions of organizational decision-makers. However, little research has focused on the role of explanations in enhancing fairness perceptions. I seek to address this research gap by answering the question: "How do explanations influence decision-makers' perceptions of fairness regarding an algorithm's decisions?" I conduct three studies to answer this question. In Study 1, a conceptual study, I explore the dimensions of explanations that need to be examined to understand the impact of explanations on fairness perceptions. In Study 2, I develop a research model hypothesizing the role of perspective-taking in communicating two different explanations to decision-makers and its effect on their fairness perceptions, and I conduct a 2×2 experiment to test the hypotheses.
In Study 3, I develop a research model hypothesizing the influence of explanation restrictiveness on the fairness of decisions as perceived by decision-makers, again tested with a 2×2 experiment. The findings of this research yield three important insights about explanations and their role in enhancing algorithmic fairness perceptions. First, I propose four dimensions of explanations that need to be considered in understanding fairness perceptions: the content type of explanations, their reasoning logic, their scope, and their discourse. Second, taking the perspective of either the organization or the customer when communicating different types of explanations leads to different effects on perceptions of the fairness of an algorithm's performance and the decisions it makes. Third, framing explanations in a less restrictive way creates space for decision-makers to engage more cognitively with algorithmic decision-making and to exercise their own judgment about it, which consequently influences their fairness perceptions.