Reason to Rote: Rethinking Memorization in Reasoning
Large language models readily memorize arbitrary training instances, such as label noise, yet they perform strikingly well on reasoning tasks. In this work, we investigate how language models memorize label noise, and why such memorization often does not heavily affect generalizable reasoning capabilities. Using two controllable synthetic reasoning datasets with noisy labels, four-digit addition (FDA) and two-hop relational reasoning (THR), we find that memorization relies on generalizable reasoning mechanisms: models continue to compute intermediate reasoning outputs even when retrieving memorized noisy labels, and intervening on reasoning adversely affects memorization. We further show that memorization operates through distributed encoding, i.e., aggregating information from various inputs and intermediate results, rather than building a look-up mechanism from inputs to noisy labels. Moreover, our FDA case study reveals that memorization occurs via outlier heuristics, where existing neuron activation patterns are slightly shifted to fit noisy labels. Together, our findings suggest that memorization of label noise in language models builds on, rather than overrides, the underlying reasoning mechanisms, shedding light on the intriguing phenomenon of benign memorization.
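To make the experimental setup concrete, the sketch below shows one way a controllable four-digit addition (FDA) dataset with injected label noise could be constructed. This is an illustrative assumption, not the authors' actual pipeline: the function name `make_fda_example`, the prompt format, and the `noise_rate` parameter are hypothetical, since the abstract does not specify the exact data format or noise-injection scheme.

```python
import random

def make_fda_example(noise_rate=0.05, seed=None):
    """Generate one four-digit addition (FDA) example.

    With probability `noise_rate`, the correct sum is replaced by a
    random incorrect answer, simulating label noise that a model
    would have to memorize rather than derive.
    """
    rng = random.Random(seed)
    a, b = rng.randint(1000, 9999), rng.randint(1000, 9999)
    label = a + b
    if rng.random() < noise_rate:
        # Sample an incorrect label from the valid range of four-digit sums.
        noisy = label
        while noisy == label:
            noisy = rng.randint(2000, 19998)
        label = noisy
    return f"{a} + {b} =", str(label)

if __name__ == "__main__":
    # A high noise rate here just makes the noisy examples easy to spot.
    for i in range(3):
        prompt, answer = make_fda_example(noise_rate=0.5, seed=i)
        print(prompt, answer)
```

A setup like this is "controllable" in the sense the abstract implies: the noise rate, the affected inputs, and the noisy targets are all fixed by the experimenter, so memorized and generalizable behavior can be cleanly separated at evaluation time.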
Yupei Du, Philipp Mondorf, Silvia Casola, Yuekun Yao, Robert Litschko, Barbara Plank
Computing Technology, Computer Technology
Yupei Du, Philipp Mondorf, Silvia Casola, Yuekun Yao, Robert Litschko, Barbara Plank. Reason to Rote: Rethinking Memorization in Reasoning [EB/OL]. (2025-07-07) [2025-07-25]. https://arxiv.org/abs/2507.04782.