Why and How LLMs Hallucinate: Connecting the Dots with Subsequence Associations
Large language models (LLMs) frequently generate hallucinations (content that deviates from factual accuracy or from the provided context), posing challenges for diagnosis due to the complex interplay of underlying causes. This paper introduces a subsequence association framework to systematically trace and understand hallucinations. Our key insight is that hallucinations arise when dominant hallucinatory associations outweigh faithful ones. Through theoretical and empirical analyses, we demonstrate that decoder-only transformers effectively function as subsequence embedding models, with linear layers encoding input-output associations. We propose a tracing algorithm that identifies causal subsequences by analyzing hallucination probabilities across randomized input contexts. Experiments show our method outperforms standard attribution techniques in identifying hallucination causes and aligns with evidence from the model's training corpus. This work provides a unified perspective on hallucinations and a robust framework for their tracing and analysis.
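To illustrate the tracing idea described above, the following is a minimal sketch, not the paper's algorithm: it assumes hypothetical `generate` and `is_hallucination` callables standing in for a model call and a hallucination check, and it scores each contiguous prompt subsequence by how often the model hallucinates when that subsequence survives a randomized (token-dropped) context.

```python
import random
from collections import defaultdict

def contiguous_subsequences(tokens, max_len=3):
    """All contiguous subsequences of the prompt up to max_len tokens."""
    return {tuple(tokens[i:i + n])
            for n in range(1, max_len + 1)
            for i in range(len(tokens) - n + 1)}

def occurs_in(sub, tokens):
    """True if `sub` appears as a contiguous run inside `tokens`."""
    n = len(sub)
    return any(tuple(tokens[i:i + n]) == sub for i in range(len(tokens) - n + 1))

def trace_associations(prompt_tokens, generate, is_hallucination,
                       n_samples=200, keep_prob=0.5, seed=0):
    """Estimate P(hallucination | subsequence present) over randomized contexts.

    `generate` and `is_hallucination` are hypothetical callables supplied by
    the caller (model inference and hallucination detection, respectively).
    """
    rng = random.Random(seed)
    candidates = contiguous_subsequences(prompt_tokens)
    present = defaultdict(int)   # how often each candidate survives randomization
    halluc = defaultdict(int)    # ... and the output is judged a hallucination

    for _ in range(n_samples):
        # Randomize the context by independently keeping each prompt token.
        context = [t for t in prompt_tokens if rng.random() < keep_prob]
        bad = is_hallucination(generate(context))
        for sub in candidates:
            if occurs_in(sub, context):
                present[sub] += 1
                halluc[sub] += int(bad)

    # Rank candidates by empirical hallucination probability when present;
    # high-scoring subsequences are the ones most associated with the error.
    return sorted(((halluc[s] / present[s], s) for s in present if present[s]),
                  reverse=True)
```

In this sketch, a subsequence whose presence consistently coincides with the hallucinated output rises to the top of the ranking, which mirrors the abstract's notion of a dominant hallucinatory association outweighing faithful ones.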
Yiyou Sun, Yu Gai, Lijie Chen, Abhilasha Ravichander, Yejin Choi, Dawn Song
Computing technology, computer technology; Linguistics
Yiyou Sun, Yu Gai, Lijie Chen, Abhilasha Ravichander, Yejin Choi, Dawn Song. Why and How LLMs Hallucinate: Connecting the Dots with Subsequence Associations [EB/OL]. (2025-04-17) [2025-06-10]. https://arxiv.org/abs/2504.12691.