Abstract
Suicide remains a pressing global public health issue. Previous studies have shown that Generative Artificial Intelligence (GenAI) Large Language Models (LLMs) hold promise for assessing suicide risk at a level comparable to that of professionals. However, the considerations and risk factors the models use in their assessments remain a black box. This study investigates whether ChatGPT-3.5 and ChatGPT-4 integrate cultural factors when assessing suicide risk (probability of suicidal ideation, potential for a suicide attempt, likelihood of a severe suicide attempt, and risk of mortality from a suicidal act) using a vignette methodology. The vignettes depicted individuals from Greece and South Korea, countries with low and high suicide rates, respectively. The contribution of this research is to examine risk assessment from an international perspective, as large language models are expected to provide culturally tailored responses; at the same time, concerns about cultural bias and racism make this study crucial. In the evaluation conducted with ChatGPT-4, only the risks associated with a severe suicide attempt and potential mortality from a suicidal act were rated higher for the South Korean characters than for their Greek counterparts. Furthermore, only within the ChatGPT-4 framework was male gender identified as a significant risk factor, leading to a heightened risk evaluation across all variables. Both ChatGPT models exhibit significant sensitivity to cultural nuances; ChatGPT-4, in particular, offers increased sensitivity and reduced bias, highlighting the importance of gender differences in suicide risk assessment. The findings suggest that, while ChatGPT-4 demonstrates an improved ability to account for cultural and gender-related factors in suicide risk assessment, there remain areas for enhancement, particularly in ensuring comprehensive and unbiased risk evaluations across diverse populations.
These results underscore the potential of GenAI models to aid culturally sensitive mental health assessments, yet they also emphasize the need for ongoing refinement to mitigate inherent biases and enhance their clinical utility.
| Original language | English |
|---|---|
| Journal | Journal of Cultural Cognitive Science |
| DOIs | |
| State | Accepted/In press - 2024 |
Bibliographical note
Publisher Copyright: © The Author(s) 2024.
Keywords
- Artificial intelligence
- Bias
- Cultural diversity
- Gender
- Mental health
- Suicide
ASJC Scopus subject areas
- Experimental and Cognitive Psychology
- Linguistics and Language