
Learning Advanced Self-Attention for Linear Transformers in the Singular Value Domain

Source: arXiv
English Abstract

Transformers have demonstrated remarkable performance across diverse domains. The key component of Transformers is self-attention, which learns the relationship between any two tokens in the input sequence. Recent studies have revealed that self-attention can be understood as a normalized adjacency matrix of a graph. Notably, from the perspective of graph signal processing (GSP), self-attention can be equivalently defined as a simple graph filter that applies GSP using the value vector as the signal. However, self-attention is a graph filter defined with only the first-order term of the polynomial matrix and acts as a low-pass filter, preventing the effective use of information across various frequencies. Consequently, existing self-attention mechanisms are designed in a rather simplified manner. Therefore, we propose a novel method, called Attentive Graph Filter (AGF), which interprets self-attention as learning a graph filter in the singular value domain from the perspective of graph signal processing for directed graphs, with linear complexity w.r.t. the input length $n$, i.e., $\mathcal{O}(nd^2)$. In our experiments, we demonstrate that AGF achieves state-of-the-art performance on various tasks, including the Long Range Arena benchmark and time series classification.
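The graph-filter view summarized above can be illustrated with a short numerical sketch. The snippet below first writes standard self-attention as a row-normalized adjacency matrix applied to the value signal (a first-order filter), and then shows, assuming a softmax-free low-rank adjacency $QK^\top$ as in many linear-attention variants, how a filter could be applied directly to its singular values in $\mathcal{O}(nd^2)$ time. The QR/SVD factorization trick and the toy spectral response `g(s)` are illustrative assumptions only, not the paper's AGF formulation.

```python
# Minimal, hypothetical sketch of the ideas in the abstract.
# This is NOT the authors' AGF implementation; the low-rank adjacency and
# the spectral response g(s) are assumptions made purely for illustration.
import numpy as np

rng = np.random.default_rng(0)
n, d = 256, 16                      # sequence length, head dimension

X = rng.standard_normal((n, d))     # token features (the "graph signal")
Wq, Wk, Wv = (rng.standard_normal((d, d)) for _ in range(3))
Q, K, V = X @ Wq, X @ Wk, X @ Wv

# 1) Standard self-attention as a first-order graph filter:
#    A is a row-normalized (directed) adjacency matrix; the output is A V.
logits = Q @ K.T / np.sqrt(d)
A = np.exp(logits - logits.max(axis=1, keepdims=True))
A /= A.sum(axis=1, keepdims=True)
out_attention = A @ V               # costs O(n^2 d)

# 2) Illustrative singular-value-domain filter with O(n d^2) cost.
#    Assume a low-rank adjacency A_lin = Q K^T (no softmax). Its SVD can be
#    recovered from d x d matrices via QR of the two factors:
Qq, Rq = np.linalg.qr(Q)            # O(n d^2)
Qk, Rk = np.linalg.qr(K)            # O(n d^2)
U, s, Vt = np.linalg.svd(Rq @ Rk.T) # O(d^3); A_lin = (Qq U) diag(s) (Qk Vt.T)^T
left, right = Qq @ U, Qk @ Vt.T

# A learnable spectral response g(s) would reweight each singular value;
# here a fixed toy response stands in for where learning would happen.
g = s / (1.0 + s)                   # placeholder for a learned filter g(s)
out_filtered = left @ np.diag(g) @ (right.T @ V)   # O(n d^2) overall

print(out_attention.shape, out_filtered.shape)     # (256, 16) (256, 16)
```

Because only the small $d \times d$ matrix is decomposed, every step scales linearly in the sequence length $n$, which is the complexity the abstract claims for AGF.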

Hyowon Wi, Jeongwhan Choi, Noseong Park

Computing technology; computer technology

Hyowon Wi, Jeongwhan Choi, Noseong Park. Learning Advanced Self-Attention for Linear Transformers in the Singular Value Domain [EB/OL]. (2025-05-13) [2025-06-14]. https://arxiv.org/abs/2505.08516.
