Abstract
This study analyzed an AI chatbot's perspectives on adolescent sexting through quantitative questionnaire responses and a qualitative conversational interview. Findings revealed problematic biases stemming from limitations in training data and algorithms. The chatbot showed an imbalanced focus on sexting's risks compared to its benefits for healthy sexual development. Responses frequently associated teen sexting with criminal offenses such as child pornography, reflecting legalistic age bias. Gender bias also emerged in framing sexting as far riskier for girls than for boys. Additionally, the chatbot demonstrated gaps in applying nuanced consent principles attuned to complex teen relationship dynamics. While covering consent fundamentals, responses lacked an understanding of how initial willingness differs from non-consensual sharing. The study suggests training data limitations skewed the chatbot's framing away from empowering adolescent sexual agency. Opportunities exist to mitigate biases by improving training data balance, incorporating community input, expanding consent frameworks, and encouraging nuanced risk/benefit analysis. More inclusive design with human oversight is vital for chatbots to provide comprehensive, empowering sexuality education to youth. This analysis of AI perspectives reveals the persistence of conservative attitudes and the need to evolve systems to support healthy adolescent sexual development.
| Original language | English |
|---|---|
| Journal | American Journal of Sexuality Education |
| DOIs | |
| State | Accepted/In press - 2024 |
| Externally published | Yes |
Bibliographical note
Publisher Copyright: © 2024 The Author(s). Published with license by Taylor & Francis Group, LLC.
Keywords
- Conservative AI attitudes
- sexual education
- teen sexting
ASJC Scopus subject areas
- Education