
Attention Layers Add Into Low-Dimensional Residual Subspaces

Source: arXiv
English Abstract

While transformer models are widely believed to operate in high-dimensional hidden spaces, we show that attention outputs are confined to a surprisingly low-dimensional subspace, where about 60% of the directions account for 99% of the variance, a phenomenon that is induced by the attention output projection matrix and consistently observed across diverse model families and datasets. Critically, we identify this low-rank structure as a fundamental cause of the prevalent dead feature problem in sparse dictionary learning, where it creates a mismatch between randomly initialized features and the intrinsic geometry of the activation space. Building on this insight, we propose a subspace-constrained training method for sparse autoencoders (SAEs), initializing feature directions into the active subspace of activations. Our approach reduces dead features from 87% to below 1% in Attention Output SAEs with 1M features, and can further extend to other sparse dictionary learning methods. Our findings provide both new insights into the geometry of attention and practical tools for improving sparse dictionary learning in large language models.
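As a rough illustration of the two steps described in the abstract, the following Python sketch (assuming PyTorch; the function names active_subspace and init_sae_decoder_in_subspace and the exact initialization scheme are assumptions for illustration, not taken from the paper) estimates the active subspace of collected attention outputs via PCA and then draws SAE decoder directions inside that subspace instead of in the full hidden space.

import torch

def active_subspace(attn_outputs: torch.Tensor, var_threshold: float = 0.99) -> torch.Tensor:
    """Estimate the active subspace of attention outputs via PCA.

    attn_outputs: (n_tokens, d_model) activations collected from one attention layer.
    Returns a (d_model, k) orthonormal basis spanning the directions that together
    explain `var_threshold` of the variance.
    """
    X = attn_outputs - attn_outputs.mean(dim=0, keepdim=True)
    # Singular values of the centered activations give per-direction variance.
    _, S, Vh = torch.linalg.svd(X, full_matrices=False)
    explained = (S ** 2) / (S ** 2).sum()
    k = int((explained.cumsum(0) < var_threshold).sum().item()) + 1
    return Vh[:k].T  # (d_model, k)

def init_sae_decoder_in_subspace(n_features: int, basis: torch.Tensor) -> torch.Tensor:
    """Draw random SAE feature directions inside the active subspace.

    Coefficients are sampled in the k-dimensional subspace and mapped back to
    d_model, so every feature starts aligned with directions the attention
    layer actually writes to, rather than with arbitrary random directions.
    """
    d_model, k = basis.shape
    coeffs = torch.randn(n_features, k)
    W_dec = coeffs @ basis.T                        # (n_features, d_model)
    return W_dec / W_dec.norm(dim=1, keepdim=True)  # unit-norm feature directions

In this sketch the mismatch described in the abstract corresponds to standard random initialization placing most features largely outside the roughly 60% of directions that carry the variance; constraining the initial decoder to the estimated basis is one simple way to avoid that.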

Junxuan Wang, Xuyang Ge, Wentao Shu, Zhengfu He, Xipeng Qiu

Subject: Computing Technology, Computer Technology

Junxuan Wang, Xuyang Ge, Wentao Shu, Zhengfu He, Xipeng Qiu. Attention Layers Add Into Low-Dimensional Residual Subspaces [EB/OL]. (2025-08-23) [2025-09-06]. https://arxiv.org/abs/2508.16929.
