Contesting algorithms: Restoring the public interest in content filtering by artificial intelligence

Research output: Contribution to journal › Article › peer-review

Abstract

In recent years, artificial intelligence has been deployed by online platforms to prevent the upload of allegedly illegal content or to remove unwarranted expressions. These systems are trained to spot objectionable content and to remove it, block it, or filter it out before it is even uploaded. Artificial intelligence filters offer a robust approach to content moderation which is shaping the public sphere. This dramatic shift in norm setting and law enforcement is potentially game-changing for democracy. Artificial intelligence filters carry censorial power, which could bypass traditional checks and balances secured by law. Their opaque and dynamic nature creates barriers to oversight, and conceals critical value choices and tradeoffs. Currently, we lack adequate tools to hold them accountable. This paper seeks to address this gap by introducing an adversarial procedure: Contesting Algorithms. It proposes to deliberately introduce friction into the dominant removal systems governed by artificial intelligence. Algorithmic content moderation often seeks to optimize a single goal, such as removing copyright-infringing materials or blocking hate speech, while other values in the public interest, such as fair use or free speech, are often neglected. Contesting algorithms introduce an adversarial design which reflects conflicting values, and thereby may offer a check on dominant removal systems. Facilitating an adversarial intervention may promote democratic principles by keeping society in the loop. An adversarial public artificial intelligence system could enhance dynamic transparency, facilitate an alternative public articulation of social values using machine learning systems, and restore societal power to deliberate and determine social tradeoffs.

Original language: English
Journal: Big Data & Society
Volume: 7
Issue number: 2
DOIs
State: Published - Jul 2020

Bibliographical note

Funding Information:
I thank Yochai Benkler, Michael Birnhack, Michal Gal, Ellen Goodman, Seda Gürses, Maayan Perel, Helen Nissenbaum, and Moran Yemini for excellent comments and suggestions. I also thank the participants of TILTing Perspectives 2019 and the research seminars at the Berkman Klein Center for Internet and Society at Harvard University, the Cornell Tech Digital Life Initiative, the Weizenbaum Institute, and the Edmond J. Safra Center for Ethics at Tel-Aviv University, for great conversations.

Funding Information:
The author(s) disclosed receipt of the following financial support for the research, authorship, and/or publication of this article: This research was funded by the Israel Science Foundation (grant 1820/17).

Publisher Copyright:
© The Author(s) 2020.

Keywords

  • accountability
  • artificial intelligence
  • content moderation
  • copyright
  • democracy
  • rule of law

ASJC Scopus subject areas

  • Information Systems
  • Communication
  • Computer Science Applications
  • Information Systems and Management
  • Library and Information Sciences
