Tactic: Adaptive Sparse Attention with Clustering and Distribution Fitting for Long-Context LLMs

Source: arXiv
Abstract

Long-context models are essential for many applications but face inefficiencies in loading large KV caches during decoding. Prior methods enforce fixed token budgets for sparse attention, assuming a set number of tokens can approximate full attention. However, these methods overlook variations in the importance of attention across heads, layers, and contexts. To address these limitations, we propose Tactic, a sparsity-adaptive and calibration-free sparse attention mechanism that dynamically selects tokens based on their cumulative attention scores rather than a fixed token budget. By setting a target fraction of total attention scores, Tactic ensures that token selection naturally adapts to variations in attention sparsity. To efficiently approximate this selection, Tactic leverages clustering-based sorting and distribution fitting, allowing it to accurately estimate token importance with minimal computational overhead. We show that Tactic outperforms existing sparse attention algorithms, achieving superior accuracy and up to 7.29x decode attention speedup. This improvement translates to an overall 1.58x end-to-end inference speedup, making Tactic a practical and effective solution for long-context LLM inference in accuracy-sensitive applications.
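To make the selection criterion concrete, the following is a minimal illustrative sketch (not the authors' implementation) of cumulative-attention-score token selection: instead of a fixed top-k budget, it keeps the smallest set of cached KV tokens whose attention probabilities sum to a target fraction p. The function and variable names are assumptions for illustration only.

# Minimal sketch of cumulative-attention-mass token selection (illustrative,
# not Tactic's actual implementation). Given a query and cached keys, keep the
# smallest prefix of tokens (sorted by attention weight) whose probabilities
# sum to at least the target fraction p.
import numpy as np

def select_tokens_by_cumulative_attention(q, K, p=0.95):
    """Return indices of KV tokens whose softmax attention mass covers >= p.

    q: (d,) query vector for the current decode step
    K: (n, d) cached key vectors
    p: target fraction of total attention mass to preserve
    """
    d = q.shape[-1]
    scores = K @ q / np.sqrt(d)            # scaled dot-product logits
    scores -= scores.max()                  # numerical stability
    probs = np.exp(scores)
    probs /= probs.sum()                    # softmax over all cached tokens

    order = np.argsort(-probs)              # sort tokens by attention weight
    cum = np.cumsum(probs[order])
    k = int(np.searchsorted(cum, p)) + 1    # smallest prefix reaching mass p
    return order[:k]

# Example: the number of selected tokens adapts to the attention distribution
# rather than being fixed in advance.
rng = np.random.default_rng(0)
q = rng.standard_normal(64)
K = rng.standard_normal((4096, 64))
idx = select_tokens_by_cumulative_attention(q, K, p=0.95)
print(f"selected {len(idx)} of {K.shape[0]} tokens")

Note that this sketch computes the full softmax only to illustrate the selection rule; per the abstract, Tactic avoids that cost by estimating token importance through clustering-based sorting and distribution fitting.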

Tian Tang, Arvind Krishnamurthy, Zhichen Zeng, Yile Gu, Kan Zhu, Liangyu Zhao, Qinyu Xu, Baris Kasikci, Rohan Kadekodi, Ang Li

Subject: Computing Technology; Computer Technology

Tian Tang, Arvind Krishnamurthy, Zhichen Zeng, Yile Gu, Kan Zhu, Liangyu Zhao, Qinyu Xu, Baris Kasikci, Rohan Kadekodi, Ang Li. Tactic: Adaptive Sparse Attention with Clustering and Distribution Fitting for Long-Context LLMs [EB/OL]. (2025-02-17) [2025-08-03]. https://arxiv.org/abs/2502.12216
