The Problem of Atypicality in LLM-Powered Psychiatry
Large language models (LLMs) are increasingly proposed as scalable solutions to the global mental health crisis. But their deployment in psychiatric contexts raises a distinctive ethical concern: the problem of atypicality. Because LLMs generate outputs based on population-level statistical regularities, their responses -- while typically appropriate for general users -- may be dangerously inappropriate when interpreted by psychiatric patients, who often exhibit atypical cognitive or interpretive patterns. We argue that standard mitigation strategies, such as prompt engineering or fine-tuning, are insufficient to resolve this structural risk. Instead, we propose dynamic contextual certification (DCC): a staged, reversible and context-sensitive framework for deploying LLMs in psychiatry, inspired by clinical translation and dynamic safety models from artificial intelligence governance. DCC reframes chatbot deployment as an ongoing epistemic and ethical process that prioritises interpretive safety over static performance benchmarks. Atypicality, we argue, cannot be eliminated -- but it can, and must, be proactively managed.
Bosco Garcia, Eugene Y. S. Chua, Harman Singh Brah
Neurology, Psychiatry
Bosco Garcia, Eugene Y. S. Chua, Harman Singh Brah. The Problem of Atypicality in LLM-Powered Psychiatry [EB/OL]. (2025-08-08) [2025-08-24]. https://arxiv.org/abs/2508.06479.