Overflow Prevention Enhances Long-Context Recurrent LLMs
A recent trend in LLMs is the development of recurrent sub-quadratic models that improve long-context processing efficiency. We investigate leading large long-context models, focusing on how their fixed-size recurrent memory affects their performance. Our experiments reveal that, even when these models are trained for extended contexts, they still underutilize long contexts. Specifically, we demonstrate that a chunk-based inference procedure, which identifies and processes only the most relevant portion of the input, can mitigate recurrent memory failures and is effective for many long-context tasks: On LongBench, our method improves the overall performance of Falcon3-Mamba-Inst-7B by 14%, Falcon-Mamba-Inst-7B by 28%, RecurrentGemma-IT-9B by 50%, and RWKV6-Finch-7B by 51%. Surprisingly, this simple approach also leads to state-of-the-art results on the challenging LongBench v2 benchmark, performing competitively with Transformers of equivalent size. Furthermore, our findings raise the question of whether recurrent models genuinely exploit long-range dependencies, as our single-chunk strategy delivers stronger performance even in tasks that presumably require cross-context relations.
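The abstract does not spell out the selection mechanism, so the following is a minimal sketch of one plausible chunk-based inference procedure, assuming a HuggingFace-style causal LM. The chunk length, the scoring rule (loss of the query tokens conditioned on each chunk), and the helper names (`chunk_text`, `score_chunk`, `chunked_generate`) are illustrative assumptions, not the paper's exact method.

```python
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

def chunk_text(text: str, tokenizer, chunk_len: int):
    """Tokenize the long context and split it into fixed-size chunks."""
    ids = tokenizer(text, return_tensors="pt").input_ids[0]
    return [ids[i : i + chunk_len] for i in range(0, len(ids), chunk_len)]

@torch.no_grad()
def score_chunk(model, chunk_ids, query_ids):
    """Relevance proxy (an assumption, not the paper's stated criterion):
    negative loss of the query tokens conditioned on the chunk. Higher
    means the chunk better predicts the query."""
    input_ids = torch.cat([chunk_ids, query_ids]).unsqueeze(0).to(model.device)
    labels = input_ids.clone()
    labels[0, : len(chunk_ids)] = -100  # ignore chunk tokens in the loss
    return -model(input_ids=input_ids, labels=labels).loss.item()

@torch.no_grad()
def chunked_generate(model, tokenizer, context, query,
                     chunk_len=2048, max_new_tokens=64):
    """Score every chunk independently, keep only the best one, and decode.
    The fixed-size recurrent state is never asked to hold more than one chunk,
    which is the overflow-prevention idea described in the abstract."""
    chunks = chunk_text(context, tokenizer, chunk_len)
    query_ids = tokenizer(query, return_tensors="pt").input_ids[0]
    scores = [score_chunk(model, c, query_ids) for c in chunks]
    best = chunks[max(range(len(chunks)), key=scores.__getitem__)]
    prompt = torch.cat([best, query_ids]).unsqueeze(0).to(model.device)
    out = model.generate(prompt, max_new_tokens=max_new_tokens)
    return tokenizer.decode(out[0, prompt.shape[1]:], skip_special_tokens=True)

if __name__ == "__main__":
    # Any causal LM runs this sketch; a recurrent model such as
    # Falcon-Mamba-Inst-7B is where the fixed-memory issue applies.
    name = "tiiuae/falcon-mamba-7b-instruct"
    tok = AutoTokenizer.from_pretrained(name)
    lm = AutoModelForCausalLM.from_pretrained(name, torch_dtype=torch.bfloat16)
    print(chunked_generate(lm, tok,
                           context="<very long document>",
                           query="\nQuestion: ...\nAnswer:"))
```

Because each chunk is scored independently of the others, the per-chunk forward passes can in principle run in parallel, so restricting generation to a single selected chunk need not slow down inference.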
Assaf Ben-Kish, Itamar Zimerman, M. Jehanzeb Mirza, James Glass, Leonid Karlinsky, Raja Giryes
Computing Technology; Computer Technology
Assaf Ben-Kish, Itamar Zimerman, M. Jehanzeb Mirza, James Glass, Leonid Karlinsky, Raja Giryes. Overflow Prevention Enhances Long-Context Recurrent LLMs [EB/OL]. (2025-05-12) [2025-06-24]. https://arxiv.org/abs/2505.07793.