GTA: Grouped-head latenT Attention

Source: arXiv
English Abstract

Attention mechanisms underpin the success of large language models (LLMs), yet their substantial computational and memory overhead poses challenges for optimizing efficiency and performance. A critical bottleneck arises as the KV cache and attention computations scale rapidly with text length, challenging deployment on hardware with limited computational and memory resources. We observe that attention mechanisms exhibit substantial redundancy: the KV cache can be significantly compressed, and attention maps across heads display high similarity, revealing that much of the computation and storage is unnecessary. Leveraging these insights, we propose Grouped-Head LatenT Attention (GTA), a novel attention mechanism that reduces memory usage and computational complexity while maintaining performance. GTA comprises two components: (1) a shared attention map mechanism that reuses attention scores across multiple heads, decreasing the key cache size; and (2) a nonlinear value decoder with learned projections that compresses the value cache into a latent space, further cutting memory needs. GTA cuts attention computation FLOPs by up to 62.5% versus Grouped-Query Attention and shrinks the KV cache by up to 70%, all while avoiding the extra overhead of Multi-Head Latent Attention, improving LLM deployment efficiency. Consequently, GTA models achieve a 2x increase in end-to-end inference speed, with prefill benefiting from the reduced computational cost and decoding benefiting from the smaller cache footprint.
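The two components described in the abstract lend themselves to a compact sketch. The PyTorch snippet below is an illustrative approximation only, not the authors' implementation: it assumes a simplified non-causal setting without KV caching, and the module and parameter names (GroupedLatentAttentionSketch, d_latent, v_decode, and so on) are hypothetical. It shows one attention map computed per group of heads and shared within the group, plus values stored in a small latent space and expanded per head by a learned nonlinear decoder.

# Hypothetical sketch of the two ideas described in the abstract.
# Names, shapes, and the decoder design are illustrative assumptions.
import torch
import torch.nn as nn

class GroupedLatentAttentionSketch(nn.Module):
    def __init__(self, d_model=512, n_heads=8, n_groups=2, d_latent=64):
        super().__init__()
        assert n_heads % n_groups == 0
        self.n_heads, self.n_groups = n_heads, n_groups
        self.d_head = d_model // n_heads
        # Queries/keys are projected once per *group*, so one attention map
        # is computed and shared by all heads in that group (smaller K cache).
        self.q_proj = nn.Linear(d_model, n_groups * self.d_head)
        self.k_proj = nn.Linear(d_model, n_groups * self.d_head)
        # Values are stored in a small latent space (smaller V cache) ...
        self.v_latent = nn.Linear(d_model, d_latent)
        # ... and expanded per head by a learned nonlinear decoder.
        self.v_decode = nn.Sequential(
            nn.Linear(d_latent, d_latent), nn.SiLU(),
            nn.Linear(d_latent, n_heads * self.d_head),
        )
        self.o_proj = nn.Linear(n_heads * self.d_head, d_model)

    def forward(self, x):
        B, T, _ = x.shape
        g, h, d = self.n_groups, self.n_heads, self.d_head
        q = self.q_proj(x).view(B, T, g, d).transpose(1, 2)  # (B, g, T, d)
        k = self.k_proj(x).view(B, T, g, d).transpose(1, 2)  # (B, g, T, d)
        # One attention map per group, reused by the h // g heads in that group.
        attn = torch.softmax(q @ k.transpose(-2, -1) / d**0.5, dim=-1)  # (B, g, T, T)
        attn = attn.repeat_interleave(h // g, dim=1)          # (B, h, T, T)
        # Latent values decoded to per-head values on the fly.
        v = self.v_decode(self.v_latent(x)).view(B, T, h, d).transpose(1, 2)
        out = (attn @ v).transpose(1, 2).reshape(B, T, h * d)
        return self.o_proj(out)

x = torch.randn(2, 16, 512)
print(GroupedLatentAttentionSketch()(x).shape)  # torch.Size([2, 16, 512])

In this simplified form, only the grouped keys and the latent values would need to be cached at inference time, which is where the claimed reductions in KV-cache size and attention FLOPs would come from; causal masking and the actual cache logic are omitted for brevity.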

Luoyang Sun, Jiwen Jiang, Cheng Deng, Xinjian Wu, Haifeng Zhang, Lei Chen, Lionel Ni, Jun Wang

Computing Technology, Computer Technology

Luoyang Sun, Jiwen Jiang, Cheng Deng, Xinjian Wu, Haifeng Zhang, Lei Chen, Lionel Ni, Jun Wang. GTA: Grouped-head latenT Attention [EB/OL]. (2025-06-15) [2025-07-02]. https://arxiv.org/abs/2506.17286.