
Multi-head Temporal Latent Attention

Source: arXiv
English Abstract

While Transformer self-attention offers strong parallelism, the Key-Value (KV) cache grows linearly with sequence length and becomes a bottleneck for inference efficiency. Multi-head latent attention was recently developed to compress the KV cache into a low-rank latent space. This paper proposes Multi-head Temporal Latent Attention (MTLA), which further reduces the KV cache size along the temporal dimension, greatly lowering the memory footprint of self-attention inference. MTLA employs a hyper-network to dynamically merge temporally adjacent KV cache vectors. To address the mismatch between the compressed KV cache and processed sequence lengths, a stride-aware causal mask is proposed to ensure efficient parallel training and consistency with inference behaviour. Experiments across tasks, including speech translation, speech recognition, speech understanding and text summarisation, demonstrate that MTLA achieves competitive performance compared to standard Multi-Head Attention (MHA), while greatly improving inference speed and GPU memory usage. For example, on an English-German speech translation task, MTLA achieves a 5.3x speedup and an 8.3x reduction in GPU memory usage compared to MHA, while maintaining translation quality.
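
The abstract only names the two mechanisms at a high level: merging temporally adjacent KV cache vectors and a stride-aware causal mask. The PyTorch sketch below is a rough illustration of those ideas under simplifying assumptions, not the authors' implementation: it uses a fixed compression stride, a single learned weight vector standing in for the paper's hyper-network, and it does not cover the newest, partially filled slot (which a full implementation would need to handle, e.g. with an uncompressed current-step KV).

import torch
import torch.nn.functional as F

def compress_kv_temporally(kv: torch.Tensor, stride: int,
                           merge_logits: torch.Tensor) -> torch.Tensor:
    """Merge every `stride` temporally adjacent KV vectors into one slot.

    kv:           (batch, seq_len, dim) cached key or value vectors
    merge_logits: (stride,) learned scores turned into merge weights;
                  an illustrative stand-in for the paper's hyper-network,
                  which produces these weights dynamically
    Returns:      (batch, ceil(seq_len / stride), dim) compressed cache
    """
    b, t, d = kv.shape
    pad = (-t) % stride                        # pad so seq_len divides by stride
    kv = F.pad(kv, (0, 0, 0, pad))
    kv = kv.view(b, -1, stride, d)             # group adjacent time steps
    w = torch.softmax(merge_logits, dim=0)     # weights within each temporal group
    return (kv * w.view(1, 1, stride, 1)).sum(dim=2)

def stride_aware_causal_mask(q_len: int, stride: int) -> torch.Tensor:
    """Boolean mask (q_len, compressed_len): a query at step t may attend to a
    compressed slot only if every original step merged into that slot is <= t.
    Early steps whose slot is still incomplete are not handled in this sketch."""
    kv_len = (q_len + stride - 1) // stride
    q_pos = torch.arange(q_len).unsqueeze(1)             # (q_len, 1)
    slot_end = (torch.arange(kv_len) + 1) * stride - 1    # last step in each slot
    return q_pos >= slot_end.unsqueeze(0)                 # True = attention allowed

# Example: 6 time steps compressed with stride 2 into 3 KV slots
kv = torch.randn(1, 6, 8)
merge_logits = torch.zeros(2)                  # uniform merge weights for the sketch
compressed = compress_kv_temporally(kv, stride=2, merge_logits=merge_logits)
mask = stride_aware_causal_mask(q_len=6, stride=2)
print(compressed.shape, mask.shape)            # torch.Size([1, 3, 8]) torch.Size([6, 3])

Halving or quartering the cached sequence length in this way is what drives the reported memory and speed gains, since attention at inference time only scans the compressed slots rather than every past step.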

Keqi Deng, Philip C. Woodland

Computing Technology, Computer Technology

Keqi Deng, Philip C. Woodland. Multi-head Temporal Latent Attention [EB/OL]. (2025-05-18) [2025-06-06]. https://arxiv.org/abs/2505.13544.
