AI, Radical Ignorance, and the Institutional Approach to Consent

Research output: Contribution to journal › Article › peer-review

Abstract

Increasingly, we face AI-based products and services. Using these services often requires our explicit consent, for example by agreeing to the service's Terms and Conditions. Recent advances enable AI to evolve and change its own modus operandi over time, such that we cannot know, at the moment of consent, what it is that we will be agreeing to in the future. Informed consent is therefore impossible regarding certain kinds of AI. Call this the problem of radical ignorance. Interestingly, radical ignorance exists in consent contexts other than AI, where individuals nonetheless seem able to provide informed consent. The article argues that radical ignorance can undermine informed consent in some contexts but not others because, under certain institutional, autonomy-protecting conditions, consent can be valid without being (perfectly) informed. By understanding these institutional conditions, we can formulate practical solutions to foster valid, albeit imperfectly informed, consent across various decision contexts and within different institutions.

Original language: English
Article number: 101
Journal: Philosophy and Technology
Volume: 37
Issue number: 3
DOIs
State: Published - Sep 2024
Externally published: Yes

Bibliographical note

Publisher Copyright:
© The Author(s) 2024.

Keywords

  • Artificial intelligence
  • Autonomy
  • Informed consent
  • Institutions
  • Radical ignorance

ASJC Scopus subject areas

  • Philosophy
  • History and Philosophy of Science
