
When recalling in-context, Transformers are not SSMs

Source: arXiv
Abstract

Despite the advantageous subquadratic complexity of modern recurrent deep learning models -- such as state-space models (SSMs) -- recent studies have highlighted their potential shortcomings compared to transformers on reasoning and memorization tasks. In this paper, we dive deeper into one such benchmark: associative recall (AR), which has been shown to correlate well with language modeling performance, and inspect in detail the effects of scaling and optimization issues in recently proposed token-mixing strategies. We first demonstrate that, unlike for standard transformers, the choice of learning rate plays a critical role in the performance of modern recurrent models: an issue that can severely affect the performance reported in previous works and suggests that further research is needed to stabilize training. Next, we show that recurrent and attention-based models exhibit contrasting benefits when scaling in width as opposed to depth, with attention being notably unable to solve AR when limited to a single layer. We then further inspect 1-layer transformers, revealing that despite their poor performance, their training dynamics surprisingly resemble the formation of induction heads, a phenomenon previously observed only in their 2-layer counterparts. Finally, through architectural ablations, we study how individual components affect the performance and optimization stability of Transformers and Mamba.
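
For readers unfamiliar with the benchmark, the sketch below shows one common way a synthetic associative-recall batch is constructed: key-value pairs are listed in context, a query key is appended, and the model must output the paired value. This is a minimal illustration only, not the authors' data pipeline; the key/value vocabulary split, sequence length, and PyTorch usage are assumptions.

```python
import torch

def make_ar_batch(batch_size=32, vocab_size=64, num_pairs=8, seed=0):
    """Minimal synthetic associative-recall batch (illustrative sketch only).

    Each sequence lists key-value pairs (k1 v1 k2 v2 ...) and then repeats one
    of the keys as a query; the target is the value paired with that key.
    """
    g = torch.Generator().manual_seed(seed)
    half = vocab_size // 2

    # Keys drawn (without repetition) from the first half of the vocabulary,
    # values from the second half, so the two roles never collide.
    keys = torch.stack([torch.randperm(half, generator=g)[:num_pairs]
                        for _ in range(batch_size)])
    values = torch.randint(half, vocab_size, (batch_size, num_pairs), generator=g)

    # Interleave keys and values into the context: [k1, v1, k2, v2, ...].
    context = torch.stack([keys, values], dim=-1).reshape(batch_size, 2 * num_pairs)

    # Query one random key per sequence; the label is its associated value.
    query_idx = torch.randint(0, num_pairs, (batch_size,), generator=g)
    query = keys[torch.arange(batch_size), query_idx]
    target = values[torch.arange(batch_size), query_idx]

    inputs = torch.cat([context, query.unsqueeze(1)], dim=1)  # (B, 2*num_pairs + 1)
    return inputs, target

inputs, target = make_ar_batch()
print(inputs.shape, target.shape)  # torch.Size([32, 17]) torch.Size([32])
```

Solving this task requires the model to retrieve, from its context, the value bound to the queried key, which is why AR is used as a probe of in-context recall for both attention and recurrent token mixers.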

Destiny Okpekpe, Antonio Orvieto

Subject: Computing Technology; Computer Technology

Destiny Okpekpe, Antonio Orvieto. When recalling in-context, Transformers are not SSMs [EB/OL]. (2025-08-26) [2025-09-05]. https://arxiv.org/abs/2508.19029.
