"I've talked to ChatGPT about my issues last night.": Examining Mental Health Conversations with Large Language Models through Reddit Analysis
"I've talked to ChatGPT about my issues last night.": Examining Mental Health Conversations with Large Language Models through Reddit Analysis
We investigate the role of large language models (LLMs) in supporting mental health by analyzing Reddit posts and comments about mental health conversations with ChatGPT. Our findings reveal that users value ChatGPT as a safe, non-judgmental space, often favoring it over human support due to its accessibility, availability, and knowledgeable responses. ChatGPT provides a range of support, including actionable advice, emotional support, and validation, while helping users better understand their mental states. Additionally, we found that ChatGPT offers innovative support for individuals facing mental health challenges, such as assistance in navigating difficult conversations, preparing for therapy sessions, and exploring therapeutic interventions. However, users also voiced potential risks, including the spread of incorrect health advice, ChatGPT's overly validating nature, and privacy concerns. We discuss the implications of LLMs as tools for mental health support in both everyday health and clinical therapy settings and suggest strategies to mitigate risks in LLM-powered interactions.
Kyuha Jung, Gyuho Lee, Yuanhui Huang, Yunan Chen
Medical research methods; Neurology; Psychiatry
Kyuha Jung, Gyuho Lee, Yuanhui Huang, Yunan Chen. "I've talked to ChatGPT about my issues last night.": Examining Mental Health Conversations with Large Language Models through Reddit Analysis [EB/OL]. (2025-04-28) [2025-06-25]. https://arxiv.org/abs/2504.20320