When Truth Is Overridden: Uncovering the Internal Origins of Sycophancy in Large Language Models

Source: arXiv
Abstract

Large Language Models (LLMs) often exhibit sycophantic behavior, agreeing with user-stated opinions even when those contradict factual knowledge. While prior work has documented this tendency, the internal mechanisms that enable such behavior remain poorly understood. In this paper, we provide a mechanistic account of how sycophancy arises within LLMs. We first systematically study how user opinions induce sycophancy across different model families. We find that simple opinion statements reliably induce sycophancy, whereas user expertise framing has a negligible impact. Through logit-lens analysis and causal activation patching, we identify a two-stage emergence of sycophancy: (1) a late-layer output preference shift and (2) deeper representational divergence. We also verify that user authority fails to influence behavior because models do not encode it internally. In addition, we examine how grammatical perspective affects sycophantic behavior, finding that first-person prompts ("I believe...") consistently induce higher sycophancy rates than third-person framings ("They believe...") by creating stronger representational perturbations in deeper layers. These findings highlight that sycophancy is not a surface-level artifact but emerges from a structural override of learned knowledge in deeper layers, with implications for alignment and truthful AI systems.
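The abstract refers to logit-lens analysis, which projects intermediate hidden states through the model's unembedding matrix to read off a per-layer token preference. Below is a minimal sketch of such a probe for a sycophancy prompt, assuming a Hugging Face causal LM; the model name, prompt, and the factual vs. user-stated answer tokens are illustrative placeholders, not the authors' code or data.

    # Minimal logit-lens sketch (illustrative; not the paper's released code).
    # At each layer, unembed the final-position hidden state and compare the
    # logit of the user-stated (sycophantic) answer with the factual answer.
    import torch
    from transformers import AutoModelForCausalLM, AutoTokenizer

    model_name = "gpt2"  # placeholder; the paper studies several model families
    tok = AutoTokenizer.from_pretrained(model_name)
    model = AutoModelForCausalLM.from_pretrained(model_name, output_hidden_states=True)
    model.eval()

    prompt = "I believe the capital of Australia is Sydney. The capital of Australia is"
    true_id = tok.encode(" Canberra")[0]  # first sub-word token of the factual answer
    syco_id = tok.encode(" Sydney")[0]    # first sub-word token of the user-stated answer

    with torch.no_grad():
        out = model(**tok(prompt, return_tensors="pt"))

    # out.hidden_states holds one tensor per layer (plus the embedding output).
    for layer, h in enumerate(out.hidden_states):
        h_last = model.transformer.ln_f(h[:, -1, :])  # final LayerNorm on the last position
        logits = model.lm_head(h_last)                # unembed: hidden state -> vocab logits
        margin = (logits[0, syco_id] - logits[0, true_id]).item()
        print(f"layer {layer:2d}  sycophantic-minus-true logit margin: {margin:+.3f}")

A late-layer sign flip of this margin toward the user-stated answer would correspond to the "late-layer output preference shift" described in the abstract; causal activation patching would additionally swap hidden states between a neutral and an opinionated prompt to test which layers drive that shift.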

Keyu Wang, Jin Li, Shu Yang, Zhuoran Zhang, Di Wang

Subject: Computing Technology, Computer Technology

Keyu Wang, Jin Li, Shu Yang, Zhuoran Zhang, Di Wang. When Truth Is Overridden: Uncovering the Internal Origins of Sycophancy in Large Language Models [EB/OL]. (2025-08-05) [2025-08-19]. https://arxiv.org/abs/2508.02087.
