Building Multilingual Datasets for Predicting Mental Health Severity through LLMs: Prospects and Challenges
Large Language Models (LLMs) are increasingly being integrated into various medical fields, including mental health support systems. However, there is a gap in research regarding the effectiveness of LLMs in non-English mental health support applications. To address this problem, we present a novel multilingual adaptation of widely-used mental health datasets, translated from English into six languages: Greek, Turkish, French, Portuguese, German, and Finnish. This dataset enables a comprehensive evaluation of LLM performance in detecting mental health conditions and assessing their severity across multiple languages. Experimenting with GPT and Llama, we observe considerable variability in performance across languages, even though the models are evaluated on the same translated dataset. This inconsistency underscores the complexities inherent in multilingual mental health support, where language-specific nuances and mental health data coverage can affect the accuracy of the models. Through comprehensive error analysis, we emphasize the risks of relying exclusively on LLMs in medical settings (e.g., their potential to contribute to misdiagnoses). Moreover, our proposed approach offers significant cost savings for multilingual tasks, presenting a major advantage for broad-scale implementation.
A. Seza Doğruöz, John Pavlopoulos, Konstantinos Skianis
Subjects: Neurology; Psychiatry; Foreign languages; Uralic (Finno-Ugric) languages
A. Seza Doğruöz, John Pavlopoulos, Konstantinos Skianis. Building Multilingual Datasets for Predicting Mental Health Severity through LLMs: Prospects and Challenges [EB/OL]. (2024-09-25) [2025-06-08]. https://arxiv.org/abs/2409.17397.