
Mechanistic evaluation of Transformers and state space models


Source: arXiv
Abstract

State space models (SSMs) for language modelling promise an efficient and performant alternative to quadratic-attention Transformers, yet show variable performance on recalling basic information from the context. While performance on synthetic tasks like Associative Recall (AR) can point to this deficiency, behavioural metrics provide little information as to why--on a mechanistic level--certain architectures fail and others succeed. To address this, we conduct experiments on AR and find that only Transformers and Based SSM models fully succeed at AR, with Mamba a close third, whereas the other SSMs (H3, Hyena) fail. We then use causal interventions to explain why. We find that Transformers and Based learn to store key-value associations in-context using induction heads. By contrast, the SSMs compute these associations only at the last state, with only Mamba succeeding because of its short convolution component. To extend and deepen these findings, we introduce Associative Treecall (ATR), a synthetic task similar to AR based on PCFG induction. ATR introduces language-like hierarchical structure into the AR setting. We find that all architectures learn the same mechanism as they did for AR, and the same three models succeed at the task. These results reveal that architectures with similar accuracy may still have substantive differences, motivating the adoption of mechanistic evaluations.
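The Associative Recall (AR) task described above can be pictured with a small synthetic data generator. The Python sketch below is an illustration under assumed conventions rather than the authors' exact setup: the context interleaves key-value pairs, a previously seen key is appended as the query, and the model must emit the value originally paired with that key.

import random

def make_ar_example(num_pairs=8, key_vocab=range(100, 200), value_vocab=range(200, 300)):
    # Sample distinct keys and values, interleave them as the context,
    # then append one previously seen key as the query.
    keys = random.sample(list(key_vocab), num_pairs)
    values = random.sample(list(value_vocab), num_pairs)
    context = [tok for pair in zip(keys, values) for tok in pair]  # k1 v1 k2 v2 ...
    query_index = random.randrange(num_pairs)
    inputs = context + [keys[query_index]]  # query key repeated at the end
    target = values[query_index]            # model should recall the paired value
    return inputs, target

if __name__ == "__main__":
    tokens, target = make_ar_example()
    print("input tokens:", tokens)
    print("expected value:", target)

An induction-head style solution attends from the query key back to its earlier occurrence and copies the token that follows it, which corresponds to the in-context key-value storage mechanism the abstract attributes to Transformers and Based.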

Aryaman Arora, Neil Rathi, Nikil Roashan Selvam, Róbert Csordás, Dan Jurafsky, Christopher Potts

Subjects: Computing Technology; Computer Technology

Aryaman Arora, Neil Rathi, Nikil Roashan Selvam, Róbert Csordás, Dan Jurafsky, Christopher Potts. Mechanistic evaluation of Transformers and state space models [EB/OL]. (2025-05-21) [2025-07-16]. https://arxiv.org/abs/2505.15105.
