
Learning to Skip the Middle Layers of Transformers

Source: arXiv

Abstract

Conditional computation is a popular strategy to make Transformers more efficient. Existing methods often target individual modules (e.g., mixture-of-experts layers) or skip layers independently of one another. However, interpretability research has demonstrated that the middle layers of Transformers exhibit greater redundancy, and that early layers aggregate information into token positions. Guided by these insights, we propose a novel architecture that dynamically skips a variable number of layers from the middle outward. In particular, a learned gating mechanism determines whether to bypass a symmetric span of central blocks based on the input, and a gated attention mechanism prevents subsequent tokens from attending to skipped token positions. Residual norms are controlled with a 'sandwich' or 'perilayernorm' scheme and gate sparsity with an adaptive regularization loss. We aimed to reduce compute requirements for 'simpler' tokens and potentially foster an emergent multi-level representational hierarchy but, at the scales investigated, our approach does not achieve improvements in the trade-off between validation cross-entropy and estimated FLOPs compared to dense baselines with fewer layers. We release our code at https://github.com/tim-lawson/skip-middle.
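The abstract describes the architecture only at a high level; the authors' released code at https://github.com/tim-lawson/skip-middle is the authoritative implementation. Purely as an illustration of the central idea, the following is a minimal PyTorch sketch, not the paper's method: layers are paired symmetrically around the middle of the stack, a per-token gate is read from the residual stream just before the middle span, and soft sigmoid gates scale each pair's residual update so that inner pairs are suppressed before outer ones. The names used here (SkipMiddleStack, Block, gate_proj, n_skippable_pairs) are hypothetical, and the sketch omits the gated attention over skipped positions, the 'sandwich'/'perilayernorm' scheme, and the adaptive sparsity loss mentioned above.

```python
# Illustrative sketch only; assumptions noted in comments. Not the released implementation.
import torch
import torch.nn as nn


class Block(nn.Module):
    """Stand-in pre-norm Transformer block (attention, and the paper's gated
    attention over skipped positions, are omitted to keep the sketch short)."""

    def __init__(self, d_model: int):
        super().__init__()
        self.norm = nn.LayerNorm(d_model)
        self.mlp = nn.Sequential(
            nn.Linear(d_model, 4 * d_model),
            nn.GELU(),
            nn.Linear(4 * d_model, d_model),
        )

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        return x + self.mlp(self.norm(x))


class SkipMiddleStack(nn.Module):
    """Per-token soft gating of a symmetric span of central blocks.

    Layers are grouped into pairs from the middle outward (pair 0 is the
    innermost). A per-token gate vector is read from the residual stream just
    before the middle span; the multiplier for pair p is the product of the
    gates from p outward, so closing an outer gate also suppresses every pair
    inside it, i.e. tokens skip a variable number of layers from the middle out.
    """

    def __init__(self, d_model: int, n_layers: int, n_skippable_pairs: int):
        super().__init__()
        assert n_layers % 2 == 0 and 2 * n_skippable_pairs < n_layers
        self.blocks = nn.ModuleList(Block(d_model) for _ in range(n_layers))
        self.gate_proj = nn.Linear(d_model, n_skippable_pairs)  # per-token gate logits
        self.n_layers = n_layers
        self.n_pairs = n_skippable_pairs

    def pair_index(self, i: int) -> int | None:
        """Distance of layer i from the centre, or None if it is never skipped."""
        mid = self.n_layers // 2
        p = mid - 1 - i if i < mid else i - mid
        return p if p < self.n_pairs else None

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        first_skippable = self.n_layers // 2 - self.n_pairs
        mult = None
        for i, block in enumerate(self.blocks):
            if i == first_skippable:
                # Gates are computed once per token from the residual stream
                # entering the middle span (i.e. after the early layers).
                gates = torch.sigmoid(self.gate_proj(x))  # (batch, seq, n_pairs)
                # Cumulative product from the outermost pair inward enforces
                # nested skipping: mult[..., 0] <= mult[..., 1] <= ...
                mult = torch.flip(
                    torch.cumprod(torch.flip(gates, dims=[-1]), dim=-1), dims=[-1]
                )
            p = self.pair_index(i)
            if p is None:
                x = block(x)
            else:
                # Soft bypass: interpolate between skipping the block entirely
                # (multiplier 0) and running it as usual (multiplier 1).
                x = x + mult[..., p : p + 1] * (block(x) - x)
        return x


if __name__ == "__main__":
    model = SkipMiddleStack(d_model=64, n_layers=8, n_skippable_pairs=2)
    y = model(torch.randn(2, 16, 64))
    print(y.shape)  # torch.Size([2, 16, 64])
```

In this sketch, the cumulative product over gate values is one way to realise a symmetric span skipped from the middle outward; the hard, discrete skipping and the FLOP savings discussed in the abstract would additionally require thresholding or rounding the gates at inference time.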

Tim Lawson, Laurence Aitchison

Subjects: Computing Technology, Computer Technology

Tim Lawson, Laurence Aitchison. Learning to Skip the Middle Layers of Transformers [EB/OL]. (2025-06-26) [2025-07-09]. https://arxiv.org/abs/2506.21103.
