
Answer-Centric or Reasoning-Driven? Uncovering the Latent Memory Anchor in LLMs

Source: arXiv

Abstract

While Large Language Models (LLMs) demonstrate impressive reasoning capabilities, growing evidence suggests much of their success stems from memorized answer-reasoning patterns rather than genuine inference. In this work, we investigate a central question: are LLMs primarily anchored to final answers or to the textual pattern of reasoning chains? We propose a five-level answer-visibility prompt framework that systematically manipulates answer cues and probes model behavior through indirect, behavioral analysis. Experiments across state-of-the-art LLMs reveal a strong and consistent reliance on explicit answers: performance drops by 26.90% when answer cues are masked, even with complete reasoning chains. These findings suggest that much of the reasoning exhibited by LLMs may reflect post-hoc rationalization rather than true inference, calling into question their inferential depth. Our study uncovers the answer-anchoring phenomenon with rigorous empirical validation and underscores the need for a more nuanced understanding of what constitutes reasoning in LLMs.
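The abstract's core manipulation is masking explicit answer cues while leaving the reasoning chain intact. The paper's actual framework is not shown here; the following is a minimal hypothetical sketch of that kind of manipulation, where `mask_answer_cues` and its regex patterns are illustrative assumptions, not the authors' implementation.

```python
import re

# Hypothetical sketch (not the authors' code): replace the text following
# common answer markers with a mask token, so the prompt retains the full
# reasoning chain but hides the explicit final answer.
def mask_answer_cues(prompt: str, mask_token: str = "[MASKED]") -> str:
    """Mask answer spans after markers like 'The answer is' or 'Answer:'."""
    patterns = [
        r"(?i)(the answer is\s*)([^\n.]+)",  # e.g. "The answer is 42"
        r"(?i)(answer:\s*)([^\n]+)",         # e.g. "Answer: 42"
    ]
    masked = prompt
    for pat in patterns:
        # Keep the marker (group 1), replace the answer span (group 2).
        masked = re.sub(pat, lambda m: m.group(1) + mask_token, masked)
    return masked

chain = "Step 1: 6 * 7 = 42. Step 2: verify the product. The answer is 42."
print(mask_answer_cues(chain))
# → Step 1: 6 * 7 = 42. Step 2: verify the product. The answer is [MASKED].
```

Comparing model accuracy on the original versus the masked prompt is the kind of indirect, behavioral probe the abstract describes: a large drop under masking indicates anchoring to the answer cue rather than to the reasoning chain.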

Yang Wu, Yifan Zhang, Yiwei Wang, Yujun Cai, Yurong Wu, Yuran Wang, Ning Xu, Jian Cheng

Subject: Computing Technology, Computer Technology

Yang Wu, Yifan Zhang, Yiwei Wang, Yujun Cai, Yurong Wu, Yuran Wang, Ning Xu, Jian Cheng. Answer-Centric or Reasoning-Driven? Uncovering the Latent Memory Anchor in LLMs [EB/OL]. (2025-06-21) [2025-07-02]. https://arxiv.org/abs/2506.17630.
