Abstract
The use of Large Language Models (LLMs) in mental health highlights the need to understand how they respond to emotional content. Previous research shows that emotion-inducing prompts can elevate “anxiety” in LLMs, affecting their behavior and amplifying their biases. Here, we found that traumatic narratives increased Chat-GPT-4’s reported anxiety, while mindfulness-based exercises reduced it, though not to baseline. These findings suggest that managing LLMs’ “emotional states” can foster safer and more ethical human-AI interactions.
| Original language | English |
|---|---|
| Article number | 132 |
| Journal | npj Digital Medicine |
| Volume | 8 |
| Issue number | 1 |
| DOIs | |
| State | Published - Dec 2025 |
Bibliographical note
Publisher Copyright: © The Author(s) 2025.
ASJC Scopus subject areas
- Medicine (miscellaneous)
- Health Informatics
- Computer Science Applications
- Health Information Management