Latent Multi-Head Attention for Small Language Models
We present the first comprehensive study of latent multi-head attention (MLA) for small language models, revealing clear efficiency-quality trade-offs. Training 30M-parameter GPT models on 100,000 synthetic stories, we benchmark three architectural variants: standard multi-head attention (MHA), MLA, and MLA with rotary positional embeddings (MLA+RoPE). Our key finding is that MLA+RoPE with half-rank latent dimensions (r = d/2) achieves a 45% KV-cache memory reduction while incurring only a 0.3% increase in validation loss (essentially matching MHA quality), a Pareto improvement for memory-constrained deployment. We further show that RoPE is crucial for MLA in small models: without it, MLA underperforms vanilla attention by 3-5%, but with RoPE, it surpasses vanilla by 2%. Inference benchmarks on NVIDIA A100 GPUs reveal that MLA with r = d/2 achieves a 1.4× speedup over full-rank MLA while maintaining the memory savings. GPT-4 evaluations corroborate the perplexity results, with our models achieving the highest quality scores (7.4/10) across grammar, creativity, and consistency metrics. Code and models will be released upon acceptance.
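To make the KV-cache savings concrete, the sketch below shows a minimal latent multi-head attention layer with low-rank KV compression in PyTorch. It is an illustrative assumption, not the paper's released code: the module name, the down/up-projection layout, and the defaults (d_model, n_heads, and the half-rank latent r = d/2) are chosen for exposition, and RoPE is omitted for brevity.

```python
# Minimal sketch of latent multi-head attention (MLA) with low-rank KV
# compression. Hyperparameters and layer names are illustrative assumptions.
import torch
import torch.nn as nn
import torch.nn.functional as F


class LatentMultiHeadAttention(nn.Module):
    def __init__(self, d_model: int = 384, n_heads: int = 6, r: int | None = None):
        super().__init__()
        self.n_heads = n_heads
        self.d_head = d_model // n_heads
        self.r = r if r is not None else d_model // 2  # half-rank latent, r = d/2

        self.w_q = nn.Linear(d_model, d_model, bias=False)
        # Keys and values are compressed into a shared latent of width r ...
        self.w_down_kv = nn.Linear(d_model, self.r, bias=False)
        # ... and re-expanded to per-head keys/values at attention time.
        self.w_up_k = nn.Linear(self.r, d_model, bias=False)
        self.w_up_v = nn.Linear(self.r, d_model, bias=False)
        self.w_out = nn.Linear(d_model, d_model, bias=False)

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        b, t, d = x.shape
        # At inference, only this (b, t, r) latent needs to be cached,
        # rather than full (b, t, 2*d) keys and values.
        c_kv = self.w_down_kv(x)

        q = self.w_q(x).view(b, t, self.n_heads, self.d_head).transpose(1, 2)
        k = self.w_up_k(c_kv).view(b, t, self.n_heads, self.d_head).transpose(1, 2)
        v = self.w_up_v(c_kv).view(b, t, self.n_heads, self.d_head).transpose(1, 2)

        # Causal scaled dot-product attention (RoPE omitted in this sketch).
        out = F.scaled_dot_product_attention(q, k, v, is_causal=True)
        out = out.transpose(1, 2).contiguous().view(b, t, d)
        return self.w_out(out)
```

Under this layout, the per-token cache cost drops from 2·d floats (keys plus values) to r floats; with r = d/2 that is roughly the memory reduction regime the abstract reports.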
Sushant Mehta, Raj Dandekar, Rajat Dandekar, Sreedath Panat
Computing Technology, Computer Technology
Sushant Mehta, Raj Dandekar, Rajat Dandekar, Sreedath Panat. Latent Multi-Head Attention for Small Language Models [EB/OL]. (2025-06-10) [2025-08-02]. https://arxiv.org/abs/2506.09342.