
Demystifying Singular Defects in Large Language Models

Source: arXiv
Abstract

Large transformer models are known to produce high-norm tokens. In vision transformers (ViTs), such tokens have been mathematically modeled through the singular vectors of the linear approximations of layers. However, in large language models (LLMs), the underlying causes of high-norm tokens remain largely unexplored, and their different properties from those of ViTs require a new analysis framework. In this paper, we provide both theoretical insights and empirical validation across a range of recent models, leading to the following observations: i) The layer-wise singular direction predicts the abrupt explosion of token norms in LLMs. ii) The negative eigenvalues of a layer explain its sudden decay. iii) The computational pathways leading to high-norm tokens differ between initial and noninitial tokens. iv) High-norm tokens are triggered by the right leading singular vector of the matrix approximating the corresponding modules. We showcase two practical applications of these findings: the improvement of quantization schemes and the design of LLM signatures. Our findings not only advance the understanding of singular defects in LLMs but also open new avenues for their application. We expect that this work will stimulate further research into the internal mechanisms of LLMs. Code is released at https://github.com/haoqiwang/singular_defect.
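
To make observations (i) and (iv) more concrete, here is a minimal, illustrative sketch (in PyTorch with Hugging Face Transformers) of two probes suggested by the abstract: tracking per-token hidden-state norms across layers to locate the abrupt norm explosion, and inspecting the leading singular direction of a crude linear approximation of one MLP block. This is not the authors' released code (see the repository linked above); the model choice ("gpt2"), the arbitrary layer index, and the linearization that simply drops the activation function are assumptions made purely for illustration.

```python
# Illustrative sketch only -- not the authors' method or released code.
# Assumptions: "gpt2" as the probed model, layer index 2 chosen arbitrarily,
# and the MLP linearized by dropping its activation function.

import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

model_name = "gpt2"  # assumption: any small causal LM exposing hidden states
tok = AutoTokenizer.from_pretrained(model_name)
model = AutoModelForCausalLM.from_pretrained(model_name)
model.eval()

inputs = tok("The quick brown fox jumps over the lazy dog.", return_tensors="pt")

with torch.no_grad():
    out = model(**inputs, output_hidden_states=True)

# (1) Per-token L2 norms at every layer: a sudden jump at some layer is the
#     kind of norm explosion the paper relates to layer-wise singular directions.
for layer_idx, h in enumerate(out.hidden_states):
    norms = h[0].norm(dim=-1)  # shape (seq_len,)
    print(f"layer {layer_idx:2d}  max token norm = {norms.max().item():8.2f}")

# (2) Leading singular direction of a crude linear approximation of one MLP block.
#     GPT-2 stores MLP weights as Conv1D with shape (in_features, out_features),
#     so with the activation dropped (an assumption) the composed map acts as
#     x -> x @ (W_fc @ W_proj); we transpose to get a conventional (out, in) matrix.
block = model.transformer.h[2].mlp             # layer index chosen arbitrarily
W_fc = block.c_fc.weight.detach()              # (d_model, 4*d_model)
W_proj = block.c_proj.weight.detach()          # (4*d_model, d_model)
W_lin = (W_fc @ W_proj).T                      # crude linearization, (d_model, d_model)

U, S, Vh = torch.linalg.svd(W_lin)
print("top singular value:", S[0].item())
v1 = Vh[0]    # right leading singular vector: inputs aligned with it are amplified most
u1 = U[:, 0]  # left leading singular vector: the output direction that blows up
```

One could then, for example, measure the cosine similarity between high-norm hidden states found in probe (1) and the left leading singular vector from probe (2); this is only a plausible way to explore the abstract's claims, not a reproduction of the paper's experiments.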

Haoqi Wang, Tong Zhang, Mathieu Salzmann

Computing Technology, Computer Technology

Haoqi Wang, Tong Zhang, Mathieu Salzmann. Demystifying Singular Defects in Large Language Models [EB/OL]. (2025-06-27) [2025-07-16]. https://arxiv.org/abs/2502.07004.
