Unable to Forget: Proactive Interference Reveals Working Memory Limits in LLMs Beyond Context Length
Information retrieval in Large Language Models (LLMs) is increasingly recognized as intertwined with generation capabilities rather than mere lookup. While longer contexts are often assumed to improve retrieval, the effects of intra-context interference remain understudied. To address this, we adapt the proactive interference (PI) paradigm from cognitive science, where earlier information disrupts recall of newer updates. In humans, susceptibility to such interference is inversely linked to working memory capacity. We introduce PI-LLM, an evaluation that sequentially streams semantically related key-value updates and queries only the final values. Although these final values are clearly positioned just before the query, LLM retrieval accuracy declines log-linearly toward zero as interference accumulates; errors arise from retrieving previously overwritten values. Attempts to mitigate interference via prompt engineering (e.g., instructing models to ignore earlier input) yield limited success. These findings reveal a fundamental constraint on LLMs' ability to disentangle interference and flexibly manipulate information, suggesting a working memory bottleneck beyond mere context access. This calls for approaches that strengthen models' ability to suppress irrelevant content during retrieval.
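The PI-LLM setup described above can be sketched as a simple prompt generator: each key receives a sequence of value updates, and only the most recent value per key is queried. This is a minimal illustration, not the authors' released code; the function name `build_pi_stream`, the value format, and the query wording are assumptions.

```python
import random

def build_pi_stream(keys, n_updates, seed=0):
    """Build a proactive-interference stream: every key is updated
    n_updates times in sequence; only the final value per key is the
    correct answer. Earlier (overwritten) values act as interference."""
    rng = random.Random(seed)
    lines, final = [], {}
    for _ in range(n_updates):
        for key in keys:
            value = f"v{rng.randint(1000, 9999)}"
            lines.append(f"{key}: {value}")
            final[key] = value  # later updates overwrite earlier ones
    query = "Query: report the current value of each key."
    prompt = "\n".join(lines) + "\n\n" + query
    return prompt, final

# Example: 3 keys, 4 updates each -> 12 update lines plus the query.
prompt, answers = build_pi_stream(["apple", "chair", "river"], n_updates=4)
```

Varying `n_updates` controls the amount of accumulated interference, which is the quantity the paper reports a log-linear accuracy decline against.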
Chupei Wang, Jiaqiu Vince Sun
University of Virginia; New York University
Subject: Computing Technology, Computer Technology
Chupei Wang, Jiaqiu Vince Sun. Unable to Forget: Proactive Interference Reveals Working Memory Limits in LLMs Beyond Context Length [EB/OL]. (2025-06-09) [2025-06-29]. https://arxiv.org/abs/2506.08184.