Who Made That Decision and Why? Users’ Perceptions of Human Versus AI Decision-Making and the Power of Explainable-AI

Research output: Contribution to journal › Article › peer-review

Abstract

With the advent of artificial intelligence (AI)-based systems, a new era has begun. Decisions that were once made by humans are increasingly being made by these advanced systems, with the inevitable consequence of our growing reliance on AI in many aspects of our lives. At the same time, the opaque nature of AI-based systems and the possibility of unintentional or hidden discriminatory practices and biases raise profound questions not only about the mechanics of AI, but also about how users perceive the fairness of these systems. We hypothesize that providing explanations for an AI system's decision-making process and output may enhance users' fairness perceptions, increase their trust in the system, and encourage them to adopt its decisions. Hence, we devised an online between-subject experiment that explores users' fairness and comprehension perceptions of AI systems with respect to the explanations the system provides, using a case study of a managerial decision in the human resources (HR) domain. We manipulated (i) the decision-maker (AI or human); (ii) the input (candidate characteristics); (iii) the output (recommendation valence); and (iv) the explanation style. We examined the effects of these manipulations (and of individuals' demographic and personality characteristics) using multivariate ordinal regression, and we performed a multi-level analysis of experiment components to examine the effects of the decision-maker type, the explanation style, and their combination. The results suggest three main conclusions. First, there is a gap between users' fairness and comprehension perceptions of AI-based decision-making systems and their perceptions of human decision-making. Second, knowing that an AI-based system made the decisions negatively affects users' fairness and comprehension perceptions, compared with knowing that humans made them. Third, providing case-based, certification-based, or sensitivity-based explanations can narrow this gap and may even eliminate it. Additionally, we found that users' fairness and comprehension perceptions are influenced by a variety of factors, such as the input, output, and explanation provided by the system, as well as by individuals' age, education, computer skills, and personality. Our findings may help to determine when and how to use explanations to improve users' perceptions of AI-based decision-making.

CCS Concepts:

  • Human-centered computing → Human computer interaction (HCI) → HCI design and evaluation methods → User studies
  • Human-centered computing → Human computer interaction (HCI) → Empirical studies in HCI
  • Applied computing → Law, social and behavioral sciences → Sociology
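The abstract reports examining the manipulation effects with multivariate ordinal regression. As a minimal illustrative sketch of what such an analysis can look like — not the authors' code or data — the snippet below fits a proportional-odds (ordered logit) model with statsmodels on synthetic ratings; the factor names (ai_decider, explained, positive_output) and effect directions are hypothetical stand-ins loosely echoing the study's manipulations.

```python
# Illustrative sketch only (not the authors' analysis code): an ordered-logit
# regression of a 1-5 fairness rating on hypothetical experimental factors.
import numpy as np
import pandas as pd
from statsmodels.miscmodels.ordinal_model import OrderedModel

rng = np.random.default_rng(seed=42)
n = 400

# Hypothetical binary factors mirroring the paper's manipulations.
df = pd.DataFrame({
    "ai_decider": rng.integers(0, 2, n),       # 1 = AI made the decision
    "explained": rng.integers(0, 2, n),        # 1 = an explanation was shown
    "positive_output": rng.integers(0, 2, n),  # 1 = positive recommendation
})

# Simulated latent fairness: AI lowers it, explanations raise it (assumed
# effect directions chosen for illustration only).
latent = (-0.6 * df["ai_decider"]
          + 0.5 * df["explained"]
          + 0.2 * df["positive_output"]
          + rng.logistic(size=n))

# Discretize into an ordered 1-5 Likert rating (ordered Categorical).
df["fairness"] = pd.cut(latent, bins=[-np.inf, -1, 0, 1, 2, np.inf],
                        labels=[1, 2, 3, 4, 5])

# Fit the ordered-logit model; coefficient signs indicate whether each
# factor shifts ratings toward higher or lower fairness categories.
model = OrderedModel(df["fairness"],
                     df[["ai_decider", "explained", "positive_output"]],
                     distr="logit")
result = model.fit(method="bfgs", disp=False)
print(result.summary())
```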

Original language: English
Journal: International Journal of Human-Computer Interaction
State: Accepted/In press - 2024

Bibliographical note

Publisher Copyright:
© 2024 The Author(s). Published with license by Taylor & Francis Group, LLC.

Keywords

  • Fairness
  • XAI
  • behavioral economics
  • decision making processes
  • decision making systems
  • explainability
  • intelligent systems
  • users’ perceptions

ASJC Scopus subject areas

  • Human Factors and Ergonomics
  • Human-Computer Interaction
  • Computer Science Applications
