Transformers are Graph Neural Networks

Source: arXiv
Abstract

We establish connections between the Transformer architecture, originally introduced for natural language processing, and Graph Neural Networks (GNNs) for representation learning on graphs. We show how Transformers can be viewed as message passing GNNs operating on fully connected graphs of tokens, where the self-attention mechanism captures the relative importance of all tokens with respect to each other, and positional encodings provide hints about sequential ordering or structure. Thus, Transformers are expressive set processing networks that learn relationships among input elements without being constrained by a priori graphs. Despite this mathematical connection to GNNs, Transformers are implemented via dense matrix operations that are significantly more efficient on modern hardware than sparse message passing. This leads to the perspective that Transformers are GNNs currently winning the hardware lottery.
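To make the abstract's central correspondence concrete, here is a minimal sketch (not the paper's code; all names and shapes are illustrative) of single-head self-attention computed as message passing over a fully connected graph of tokens: each token aggregates value-vector "messages" from every other token, weighted by the attention matrix, which plays the role of dense edge weights.

```python
import numpy as np

def softmax(x, axis=-1):
    x = x - x.max(axis=axis, keepdims=True)
    e = np.exp(x)
    return e / e.sum(axis=axis, keepdims=True)

def self_attention_as_message_passing(H, Wq, Wk, Wv):
    """H: (n_tokens, d) node features; Wq/Wk/Wv: (d, d) projections.

    Each token i aggregates messages (value vectors) from every token j,
    weighted by attention -- i.e., message passing on a complete graph.
    """
    Q, K, V = H @ Wq, H @ Wk, H @ Wv
    d = Q.shape[-1]
    # Attention scores act as edge weights over the fully connected token graph.
    A = softmax(Q @ K.T / np.sqrt(d), axis=-1)   # (n, n)
    # Aggregation step: weighted sum of neighbour messages.
    return A @ V

rng = np.random.default_rng(0)
n, d = 4, 8
H = rng.standard_normal((n, d))
Wq, Wk, Wv = (rng.standard_normal((d, d)) for _ in range(3))
out = self_attention_as_message_passing(H, Wq, Wk, Wv)
print(out.shape)  # (4, 8): one updated representation per token
```

Note how the aggregation is a single dense matrix product `A @ V` rather than a scatter/gather over sparse edges, which illustrates the abstract's point about dense matrix operations being friendlier to modern hardware.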

Chaitanya K. Joshi

Subject: Computing Technology; Computer Technology

Chaitanya K. Joshi. Transformers are Graph Neural Networks [EB/OL]. (2025-06-27) [2025-07-16]. https://arxiv.org/abs/2506.22084.