A Counterfactual Framework for Learning and Evaluating Explanations for Recommender Systems

Oren Barkan, Veronika Bogina, Liya Gurevitch, Yuval Asher, Noam Koenigstein

Research output: Chapter in Book/Report/Conference proceeding › Conference contribution › peer-review


In the field of recommender systems, explainability remains a pivotal yet challenging aspect. To address this, we introduce the Learning to eXplain Recommendations (LXR) framework, a post-hoc, model-agnostic approach for producing counterfactual explanations. LXR is compatible with any differentiable recommender algorithm and scores the relevance of user data in relation to recommended items. A distinctive feature of LXR is its use of novel self-supervised counterfactual loss terms, which effectively highlight the user data most responsible for a specific recommended item. Additionally, we propose several innovative counterfactual evaluation metrics tailored for assessing the quality of explanations in recommender systems. Our code is available on our GitHub repository: https://github.com/DeltaLabTLV/LXR.
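To make the counterfactual evaluation idea concrete, the following is a minimal sketch of a deletion-style metric: remove the k most-attributed items from a user's history and measure how much the target item's recommendation score drops. The toy dot-product recommender, the embeddings, and the attribution scores are all hypothetical placeholders, not the paper's actual models or learned attributions.

```python
import numpy as np

def recommend_score(user_hist, item_emb, target):
    # Toy recommender: score = dot(mean of history embeddings, target embedding).
    profile = item_emb[user_hist].mean(axis=0)
    return float(profile @ item_emb[target])

def counterfactual_eval(user_hist, attributions, item_emb, target, k=1):
    """Deletion-style counterfactual metric: drop the k most-attributed
    history items and measure the fall in the target item's score.
    A larger drop suggests the explanation found truly influential data."""
    order = np.argsort(attributions)[::-1]            # most influential first
    keep = [user_hist[i] for i in np.sort(order[k:])]  # history minus top-k
    before = recommend_score(user_hist, item_emb, target)
    after = recommend_score(keep, item_emb, target)
    return before - after

# Hand-built 2-d embeddings so the effect of deletion is easy to see:
# history item 0 aligns with the target; items 1 and 2 do not.
item_emb = np.array([
    [1.0, 0.0],  # item 0
    [0.0, 1.0],  # item 1
    [0.0, 1.0],  # item 2
    [0.0, 0.0],  # item 3
    [0.0, 0.0],  # item 4
    [1.0, 0.0],  # item 5 (target)
])
hist = [0, 1, 2]
attr = np.array([0.9, 0.1, 0.2])  # hypothetical attribution scores
drop = counterfactual_eval(hist, attr, item_emb, target=5, k=1)
# Removing item 0 drops the score from 1/3 to 0, so drop = 1/3.
```

In LXR's setting, the attribution scores would come from the learned explainer rather than being fixed by hand, and the recommender would be any differentiable model.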

Original language: English
Title of host publication: WWW 2024 - Proceedings of the ACM Web Conference
Publisher: Association for Computing Machinery, Inc
Number of pages: 11
ISBN (Electronic): 9798400701719
State: Published - 13 May 2024
Externally published: Yes
Event: 33rd ACM Web Conference, WWW 2024 - Singapore, Singapore
Duration: 13 May 2024 – 17 May 2024

Publication series

Name: WWW 2024 - Proceedings of the ACM Web Conference


Conference: 33rd ACM Web Conference, WWW 2024

Bibliographical note

Publisher Copyright:
© 2024 Owner/Author.


Keywords

  • attributions
  • counterfactual explanations
  • explainable ai
  • explanation evaluation
  • interpretability
  • recommender systems

ASJC Scopus subject areas

  • Computer Networks and Communications
  • Software


