Large Language Models for Depression Recognition in Spoken Language Integrating Psychological Knowledge
Depression is a growing concern gaining attention in both public discourse and AI research. While deep neural networks (DNNs) have been used for recognition, they still lack real-world effectiveness. Large language models (LLMs) show strong potential but require domain-specific fine-tuning and struggle with non-textual cues. Since depression is often expressed through vocal tone and behaviour rather than explicit text, relying on language alone is insufficient. Diagnostic accuracy also suffers without incorporating psychological expertise. To address these limitations, we present, to the best of our knowledge, the first application of LLMs to multimodal depression detection using the DAIC-WOZ dataset. We extract audio features with the pre-trained Wav2Vec model and map them to text-based LLMs for further processing. We also propose a novel strategy for incorporating psychological knowledge into LLMs to enhance diagnostic performance, specifically by using a question-and-answer set to grant authorised knowledge to the LLMs. Our approach yields a notable improvement in both Mean Absolute Error (MAE) and Root Mean Square Error (RMSE) compared to the baseline reported in the related original paper. The code is available at https://github.com/myxp-lyp/Depression-detection.git
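A minimal sketch of the audio-to-LLM pipeline outlined above: Wav2Vec frame features are projected into the token-embedding space of a text-based LLM and concatenated with an embedded psychology-informed Q&A prompt. It assumes Hugging Face `transformers` and `torch`; the model choices, the `AudioToLLMProjector` module, and the prompt text are illustrative assumptions, not the authors' exact configuration.

```python
import torch
import torch.nn as nn
from transformers import Wav2Vec2Model, AutoModelForCausalLM, AutoTokenizer


class AudioToLLMProjector(nn.Module):
    """Map Wav2Vec frame features into the LLM's token-embedding space."""

    def __init__(self, audio_dim: int, llm_dim: int):
        super().__init__()
        self.proj = nn.Linear(audio_dim, llm_dim)

    def forward(self, audio_features: torch.Tensor) -> torch.Tensor:
        return self.proj(audio_features)


# Pre-trained audio encoder (frozen) and a text-based LLM (placeholders).
wav2vec = Wav2Vec2Model.from_pretrained("facebook/wav2vec2-base-960h").eval()
llm = AutoModelForCausalLM.from_pretrained("gpt2")
tokenizer = AutoTokenizer.from_pretrained("gpt2")

projector = AudioToLLMProjector(
    audio_dim=wav2vec.config.hidden_size,  # 768 for wav2vec2-base
    llm_dim=llm.config.hidden_size,        # 768 for gpt2
)

# Dummy 1-second waveform at 16 kHz standing in for a DAIC-WOZ clip.
waveform = torch.randn(1, 16000)
with torch.no_grad():
    audio_feats = wav2vec(waveform).last_hidden_state  # (1, T, 768)

audio_embeds = projector(audio_feats)  # (1, T, llm_dim)

# A psychology-informed Q&A snippet (hypothetical) is embedded and prepended
# to the projected audio features before the LLM forward pass.
prompt = "Q: Which vocal cues can indicate depression? A: Flat tone, slow speech."
prompt_ids = tokenizer(prompt, return_tensors="pt").input_ids
prompt_embeds = llm.get_input_embeddings()(prompt_ids)

inputs_embeds = torch.cat([prompt_embeds, audio_embeds], dim=1)
outputs = llm(inputs_embeds=inputs_embeds)
print(outputs.logits.shape)  # (1, prompt_len + T, vocab_size)
```

In this sketch the projected audio tokens are treated like ordinary input embeddings; a regression head over the LLM's final hidden states would then predict the depression score, which is one plausible way to obtain the MAE/RMSE figures the abstract refers to.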
Yupei Li, Shuaijie Shao, Manuel Milling, Björn W. Schuller
Current state and development of medicine; computing and computer technology
Yupei Li, Shuaijie Shao, Manuel Milling, Björn W. Schuller. Large Language Models for Depression Recognition in Spoken Language Integrating Psychological Knowledge [EB/OL]. (2025-05-28) [2025-06-27]. https://arxiv.org/abs/2505.22863