National Preprint Platform

Artifacts and Attention Sinks: Structured Approximations for Efficient Vision Transformers


Source: Arxiv
English Abstract

Vision transformers have emerged as a powerful tool across a wide range of applications, yet their inner workings remain only partially understood. In this work, we examine the phenomenon of massive tokens - tokens with exceptionally high activation norms that act as attention sinks - and artifact tokens that emerge as a byproduct during inference. Our analysis reveals that these tokens mutually suppress one another through the attention mechanism, playing a critical role in regulating information flow within the network. Leveraging these insights, we introduce Fast Nyström Attention (FNA), a training-free method that approximates self-attention in linear time and space by exploiting the structured patterns formed by massive and artifact tokens. Additionally, we propose a masking strategy to mitigate noise from these tokens, yielding modest performance gains at virtually no cost. We evaluate our approach on popular pretrained vision backbones and demonstrate competitive performance on retrieval, classification, segmentation, and visual question answering (VQA), all while reducing computational overhead.
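The abstract describes approximating self-attention in linear time via a Nyström-style factorization. As a rough illustration of the general technique (not the paper's FNA — the landmark construction via segment means below is an assumption; FNA instead exploits the structure formed by massive and artifact tokens), a generic Nyström attention approximation can be sketched as:

```python
import numpy as np

def softmax(x, axis=-1):
    # Numerically stable softmax.
    x = x - x.max(axis=axis, keepdims=True)
    e = np.exp(x)
    return e / e.sum(axis=axis, keepdims=True)

def nystrom_attention(Q, K, V, num_landmarks=8):
    """Approximate softmax attention in O(n*m) time/space using m landmarks.

    Landmarks here are mean-pooled segments of Q and K (a common generic
    choice; purely illustrative, not the paper's FNA construction).
    Assumes num_landmarks divides the sequence length n.
    """
    n, d = Q.shape
    m = num_landmarks
    scale = 1.0 / np.sqrt(d)
    # Landmark construction: mean-pool Q and K over m contiguous segments.
    Q_l = Q.reshape(m, n // m, d).mean(axis=1)
    K_l = K.reshape(m, n // m, d).mean(axis=1)
    F = softmax(Q @ K_l.T * scale)    # (n, m): queries vs. landmark keys
    A = softmax(Q_l @ K_l.T * scale)  # (m, m): landmark-landmark kernel
    B = softmax(Q_l @ K.T * scale)    # (m, n): landmark queries vs. keys
    # out ≈ softmax(QK^T/√d) V, computed without forming the n×n matrix.
    return F @ np.linalg.pinv(A) @ (B @ V)
```

With `num_landmarks == n` the three factors coincide and the pseudoinverse identity `A pinv(A) A = A` makes the approximation exact; smaller landmark counts trade accuracy for linear cost.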

Andrew Lu, Wentinn Liao, Liuhui Wang, Huzheng Yang, Jianbo Shi

Computing Technology, Computer Technology

Andrew Lu, Wentinn Liao, Liuhui Wang, Huzheng Yang, Jianbo Shi. Artifacts and Attention Sinks: Structured Approximations for Efficient Vision Transformers [EB/OL]. (2025-07-21) [2025-08-10]. https://arxiv.org/abs/2507.16018.
