In recent years, concern about the risk of bias and discrimination in algorithmic systems has grown, receiving significant attention across research communities. To ensure a system's fairness, various methods and techniques have been developed to assess and mitigate potential biases. Such methods, collectively known as "Formal Fairness", examine various aspects of the system's reasoning mechanism and outcomes, with techniques ranging from local explanations (at the feature level) to visual explanations (saliency maps). An equally important aspect is users' perception of the system's fairness. Even if a decision system is provably "Fair", users who find it difficult to understand how its decisions were made will refrain from trusting, accepting, and ultimately using the system altogether. This has raised the issue of "Perceived Fairness", which looks at means of reassuring users of a system's trustworthiness. In that sense, providing users with some form of explanation of why and how certain outcomes were reached is highly relevant, especially now that reasoning mechanisms are increasing in complexity and computational power. Recent studies propose a plethora of explanation types. The current work reviews recent progress in explaining systems' reasoning and outcomes, categorizing and presenting it as a state-of-the-art reference on fairness-related explanations.
|Title of host publication||UMAP 2021 - Adjunct Publication of the 29th ACM Conference on User Modeling, Adaptation and Personalization|
|Publisher||Association for Computing Machinery, Inc|
|Number of pages||11|
|State||Published - 21 Jun 2021|
|Event||29th ACM Conference on User Modeling, Adaptation and Personalization, UMAP 2021 - Virtual, Online, Netherlands|
Duration: 21 Jun 2021 → 25 Jun 2021
|Name||UMAP 2021 - Adjunct Publication of the 29th ACM Conference on User Modeling, Adaptation and Personalization|
|Conference||29th ACM Conference on User Modeling, Adaptation and Personalization, UMAP 2021|
|Period||21/06/21 → 25/06/21|
Bibliographical note
Funding Information:
This study was supported by the Cyprus Center for Algorithmic Transparency, which has received funding from the European Union’s Horizon 2020 Research and Innovation Program under Grant Agreement No. 810105 (CyCAT – Call: H2020-WIDESPREAD-05-2017-Twinning).
Ionela Georgiana Mocanu was supported by the Engineering and Physical Sciences Research Council (EPSRC) Centre for Doctoral Training in Pervasive Parallelism (grant EP/L01503X/1) at the University of Edinburgh, School of Informatics.
© 2021 ACM.
- Perceived fairness
- Algorithmic transparency