
Fused3S: Fast Sparse Attention on Tensor Cores

Source: arXiv
Abstract

Sparse attention is a core building block in many leading neural network models, from graph-structured learning to sparse sequence modeling. It can be decomposed into a sequence of three sparse matrix operations (3S): sampled dense-dense matrix multiplication (SDDMM), softmax normalization, and sparse matrix multiplication (SpMM). Efficiently executing the 3S computational pattern on modern GPUs remains challenging due to (a) the mismatch between unstructured sparsity and tensor cores optimized for dense operations, and (b) the high cost of data movement. Previous works have optimized these sparse operations individually or addressed one of these challenges. This paper introduces Fused3S, the first fused 3S algorithm that jointly maximizes tensor core utilization and minimizes data movement. Across real-world graph datasets, Fused3S achieves $1.6-16.3\times$ and $1.5-14\times$ speedup over state-of-the-art on H100 and A30 GPUs. Furthermore, integrating Fused3S into Graph Transformer inference accelerates end-to-end performance by $1.05-5.36\times$, consistently outperforming all 3S baselines across diverse datasets (single and batched graphs) and GPU architectures.
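The 3S decomposition named in the abstract can be illustrated with an unfused reference sketch. This is not the paper's fused tensor-core kernel, just a minimal NumPy/SciPy version of the three stages, where `mask` is an assumed sparsity pattern (e.g. a graph adjacency matrix):

```python
import numpy as np
import scipy.sparse as sp

def sparse_attention_3s(Q, K, V, mask):
    """Unfused reference for the 3S pattern: SDDMM -> softmax -> SpMM.

    Q, K, V: (n, d) dense arrays; mask: (n, n) scipy CSR matrix whose
    nonzero pattern selects which attention scores are computed.
    """
    rows, cols = mask.nonzero()
    # (1) SDDMM: compute q_i . k_j only for the retained (i, j) pairs.
    scores = np.einsum("nd,nd->n", Q[rows], K[cols]) / np.sqrt(Q.shape[1])
    A = sp.csr_matrix((scores, (rows, cols)), shape=mask.shape)
    # (2) Softmax normalization over the nonzeros of each row.
    for i in range(A.shape[0]):
        lo, hi = A.indptr[i], A.indptr[i + 1]
        if lo == hi:
            continue
        row = np.exp(A.data[lo:hi] - A.data[lo:hi].max())  # shift for stability
        A.data[lo:hi] = row / row.sum()
    # (3) SpMM: aggregate the value rows with the sparse attention weights.
    return A @ V
```

With a fully dense mask this reduces to ordinary attention; the point of Fused3S is to execute these three stages in one kernel so the intermediate sparse matrix never round-trips through global memory.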

Zitong Li, Aparna Chandramowlishwaran

Computing Technology, Computer Technology

Zitong Li, Aparna Chandramowlishwaran. Fused3S: Fast Sparse Attention on Tensor Cores [EB/OL]. (2025-05-12) [2025-06-09]. https://arxiv.org/abs/2505.08098.
