Abstract
In light of the widespread use of algorithmic (intelligent) systems across numerous domains, there is increasing awareness of the need to explain their underlying decision-making processes and resulting outcomes. Because these systems are often regarded as black boxes, adding explanations to their outcomes may make them appear more transparent and, as a result, increase users’ trust in the system and their perception of its fairness, regardless of its actual fairness, which can be measured with various fairness tests and metrics. Different explanation styles may affect users’ fairness perception and their understanding of the system’s outcome differently. Hence, there is a need to understand how various explanation styles impact non-expert users’ fairness perception and understanding of the system’s outcome. This study aims to address that need. We conducted a between-subjects user study to examine the effect of different explanation styles on users’ fairness perception and understanding of the outcome. The experiment covered four known styles of textual explanations (case-based, demographic-based, input influence-based and sensitivity-based) along with a new style (certification-based) that reflects the results of an auditing process applied to the system. The results suggest that providing some form of explanation contributes to users’ understanding of the outcome and that some explanation styles are more beneficial than others. Moreover, while the explanations a system provides are important and can indeed enhance users’ fairness perception, that perception depends mainly on the system’s outcome. These results shed light on one of the main problems in the explainability of algorithmic systems: choosing the explanation that best promotes users’ fairness perception towards a particular system, with respect to that system’s outcome. The contributions of this study are a new and realistic case study, the creation and evaluation of a new explanation style that can serve as a link between a system’s actual (computational) fairness and users’ fairness perception, and the demonstrated need to analyze and evaluate explanations while taking the system’s outcome into account.
Original language | English
---|---
Article number | 2
Journal | Ethics and Information Technology
Volume | 24
Issue number | 1
DOIs |
State | Published - Mar 2022
Bibliographical note
Publisher Copyright: © 2022, The Author(s), under exclusive licence to Springer Nature B.V.
Keywords
- Algorithmic systems
- Decision support systems
- Explainability
- Fairness
- Users’ perception
ASJC Scopus subject areas
- Computer Science Applications
- Library and Information Sciences