Abstract
Large-scale, pretrained vision-language models such as OpenAI's CLIP are a game changer in Computer Vision due to their unprecedented 'zero-shot' image classification capabilities. Because they are pretrained on huge amounts of unsupervised web-scraped data, they inherit biases reflecting human perceptions, norms and beliefs. This position paper highlights the potential of studying models such as CLIP in the context of human-animal relationships, in particular for understanding human perceptions and preferences with respect to the physical attributes of pets and their adoptability.
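As an illustration of the 'zero-shot' classification mechanism the abstract refers to (this sketch is not from the paper itself): CLIP embeds an image and a set of candidate text labels into a shared space, compares them by cosine similarity, and a softmax over the similarities yields label probabilities. Here toy random vectors stand in for real CLIP encoder outputs.

```python
import numpy as np

def zero_shot_classify(image_emb, text_embs):
    """Score candidate labels for one image, CLIP-style.

    image_emb: (d,) image embedding; text_embs: (k, d) label embeddings.
    Returns a (k,) probability vector over the candidate labels.
    """
    # CLIP compares L2-normalized embeddings, i.e. cosine similarity
    img = image_emb / np.linalg.norm(image_emb)
    txt = text_embs / np.linalg.norm(text_embs, axis=1, keepdims=True)
    logits = txt @ img                              # one similarity per label
    probs = np.exp(logits) / np.exp(logits).sum()   # softmax over labels
    return probs

# Toy stand-ins for CLIP encoder outputs (hypothetical labels for illustration)
rng = np.random.default_rng(0)
labels = ["a photo of a fluffy puppy", "a photo of a short-haired cat"]
image_emb = rng.normal(size=512)
text_embs = rng.normal(size=(2, 512))
probs = zero_shot_classify(image_emb, text_embs)
best = labels[int(np.argmax(probs))]
```

Because the labels are free-form text prompts, no task-specific training is needed; this is what makes the approach attractive for probing perceptions of pet attributes such as coat or breed.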
Original language | English |
---|---|
Title of host publication | ACI 2022 - 9th International Conference on Animal-Computer Interaction |
Subtitle of host publication | Defining Tomorrow |
Publisher | Association for Computing Machinery |
ISBN (Electronic) | 9781450398312 |
DOIs | |
State | Published - 5 Dec 2022 |
Event | 9th International Conference on Animal-Computer Interaction: Defining Tomorrow, ACI 2022 - Newcastle upon Tyne, United Kingdom Duration: 5 Dec 2022 → 8 Dec 2022 |
Publication series
Name | ACM International Conference Proceeding Series |
---|---|
Conference
Conference | 9th International Conference on Animal-Computer Interaction: Defining Tomorrow, ACI 2022 |
---|---|
Country/Territory | United Kingdom |
City | Newcastle upon Tyne |
Period | 5/12/22 → 8/12/22 |
Bibliographical note
Publisher Copyright: © 2022 Owner/Author.
Keywords
- animal-assisted reading
- animal-computer interaction
- app design
- child
- support dog
ASJC Scopus subject areas
- Human-Computer Interaction
- Computer Networks and Communications
- Computer Vision and Pattern Recognition
- Software