
Key, Value, Compress: A Systematic Exploration of KV Cache Compression Techniques


Source: arXiv
Abstract

Large language models (LLMs) have demonstrated exceptional capabilities in generating text, images, and video content. However, as context length grows, the computational cost of attention increases quadratically with the number of tokens, presenting significant efficiency challenges. This paper presents an analysis of various Key-Value (KV) cache compression strategies, offering a comprehensive taxonomy that categorizes these methods by their underlying principles and implementation techniques. Furthermore, we evaluate their impact on performance and inference latency, providing critical insights into their effectiveness. Our findings highlight the trade-offs involved in KV cache compression and its influence on handling long-context scenarios, paving the way for more efficient LLM implementations.
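To make the idea of KV cache compression concrete, the sketch below shows a minimal single-head KV cache with sliding-window eviction, one of the simplest token-dropping strategies that surveys of this kind typically cover. All names here (SlidingWindowKVCache, window_size, etc.) are illustrative assumptions and do not come from the paper or its taxonomy.

```python
import numpy as np


class SlidingWindowKVCache:
    """Stores per-token key/value vectors and keeps only the most recent
    `window_size` entries, bounding memory at the cost of discarding
    long-range context. Illustrative sketch, not the paper's method."""

    def __init__(self, window_size: int):
        self.window_size = window_size
        self.keys: list[np.ndarray] = []
        self.values: list[np.ndarray] = []

    def append(self, k: np.ndarray, v: np.ndarray) -> None:
        self.keys.append(k)
        self.values.append(v)
        # Evict the oldest entries once the cache exceeds the window.
        if len(self.keys) > self.window_size:
            self.keys.pop(0)
            self.values.pop(0)

    def attend(self, q: np.ndarray) -> np.ndarray:
        """Single-head scaled dot-product attention of query `q`
        over the cached keys/values."""
        K = np.stack(self.keys)           # (cached_len, d)
        V = np.stack(self.values)         # (cached_len, d)
        scores = K @ q / np.sqrt(q.shape[-1])
        weights = np.exp(scores - scores.max())
        weights /= weights.sum()
        return weights @ V                # (d,)


if __name__ == "__main__":
    d = 64
    cache = SlidingWindowKVCache(window_size=128)
    rng = np.random.default_rng(0)
    for _ in range(1000):                 # 1000 decode steps; memory stays O(window_size)
        cache.append(rng.normal(size=d), rng.normal(size=d))
    out = cache.attend(rng.normal(size=d))
    print(out.shape, len(cache.keys))     # (64,) 128
```

Without eviction, the cache (and per-step attention cost) grows linearly with the number of generated tokens; the sliding window caps both, which is exactly the kind of memory/quality trade-off the paper evaluates across compression strategies.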

Neusha Javidnia, Bita Darvish Rouhani, Farinaz Koushanfar

Subject: Computing Technology, Computer Technology

Neusha Javidnia, Bita Darvish Rouhani, Farinaz Koushanfar. Key, Value, Compress: A Systematic Exploration of KV Cache Compression Techniques [EB/OL]. (2025-03-14) [2025-07-19]. https://arxiv.org/abs/2503.11816
