
Learning Spatial Decay for Vision Transformers

Source: arXiv
Abstract

Vision Transformers (ViTs) have revolutionized computer vision, yet their self-attention mechanism lacks explicit spatial inductive biases, leading to suboptimal performance on spatially structured tasks. Existing approaches introduce data-independent spatial decay based on fixed distance metrics, applying uniform attention weighting regardless of image content and limiting adaptability to diverse visual scenarios. Inspired by recent advances in large language models, where content-aware gating mechanisms (e.g., GLA, HGRN2, FOX) significantly outperform static alternatives, we present the first successful adaptation of data-dependent spatial decay to 2D vision transformers. We introduce the Spatial Decay Transformer (SDT), featuring a novel Context-Aware Gating (CAG) mechanism that generates dynamic, data-dependent decay for patch interactions. Our approach learns to modulate spatial attention based on both content relevance and spatial proximity. We address the fundamental challenge of 1D-to-2D adaptation through a unified spatial-content fusion framework that integrates Manhattan distance-based spatial priors with learned content representations. Extensive experiments on ImageNet-1K classification and generation tasks demonstrate consistent improvements over strong baselines. Our work establishes data-dependent spatial decay as a new paradigm for enhancing spatial attention in vision transformers.
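Since this page carries only the abstract, the following is a minimal sketch of what a content-aware gated spatial decay over 2D patches could look like, not the authors' implementation. The class name ContextAwareGatingSketch, the single-head layout, the per-patch sigmoid gate, and the log-space fusion of the Manhattan-distance prior with the gate are assumptions for illustration.

```python
# Illustrative sketch only: data-dependent spatial decay for 2D patch attention.
import torch
import torch.nn as nn
import torch.nn.functional as F


def manhattan_distance_matrix(h: int, w: int) -> torch.Tensor:
    """Pairwise Manhattan distances between all patches on an h x w grid."""
    ys, xs = torch.meshgrid(torch.arange(h), torch.arange(w), indexing="ij")
    coords = torch.stack([ys.flatten(), xs.flatten()], dim=-1).float()  # (N, 2)
    return (coords[:, None, :] - coords[None, :, :]).abs().sum(-1)      # (N, N)


class ContextAwareGatingSketch(nn.Module):
    """Single-head attention whose decay over Manhattan distance is modulated
    by a content-dependent gate (assumed parameterization)."""

    def __init__(self, dim: int, grid_size: int):
        super().__init__()
        self.qkv = nn.Linear(dim, 3 * dim)
        self.gate = nn.Linear(dim, 1)          # per-patch decay gate (assumed form)
        self.scale = dim ** -0.5
        self.register_buffer("dist", manhattan_distance_matrix(grid_size, grid_size))

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        # x: (B, N, dim) with N = grid_size ** 2 patch tokens
        q, k, v = self.qkv(x).chunk(3, dim=-1)
        attn = (q @ k.transpose(-2, -1)) * self.scale               # (B, N, N)

        # Content-dependent decay rate in (0, 1); a gate near 1 means slow decay.
        g = torch.sigmoid(self.gate(x))                             # (B, N, 1)
        # Assumed fusion rule: the log-space penalty grows with Manhattan distance
        # and shrinks when the learned gate approaches 1.
        decay_bias = torch.log(g + 1e-6) * self.dist.unsqueeze(0)   # (B, N, N)

        attn = F.softmax(attn + decay_bias, dim=-1)
        return attn @ v


if __name__ == "__main__":
    layer = ContextAwareGatingSketch(dim=64, grid_size=14)          # 14x14 = 196 patches
    tokens = torch.randn(2, 14 * 14, 64)
    print(layer(tokens).shape)                                      # torch.Size([2, 196, 64])
```

A single scalar gate per patch is the simplest possible choice; the paper's CAG mechanism may instead gate per head or per channel, and its exact spatial-content fusion may differ from the log-space form assumed here.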

Yuxin Mao, Zhen Qin, Jinxing Zhou, Bin Fan, Jing Zhang, Yiran Zhong, Yuchao Dai

Subjects: Computing Technology; Computer Technology

Yuxin Mao, Zhen Qin, Jinxing Zhou, Bin Fan, Jing Zhang, Yiran Zhong, Yuchao Dai. Learning Spatial Decay for Vision Transformers [EB/OL]. (2025-08-13) [2025-08-24]. https://arxiv.org/abs/2508.09525.
