Titanus: Enabling KV Cache Pruning and Quantization On-the-Fly for LLM Acceleration
Large language models (LLMs) have achieved great success across various domains. Existing systems cache Keys and Values within the attention block to avoid redundant computation. However, the size of the key-value (KV) cache is unpredictable and can even be tens of times larger than the weights in long-context scenarios. In this work, we propose Titanus, a software-hardware co-design that efficiently compresses the KV cache on-the-fly. We first propose the cascade pruning-quantization (CPQ) method to reduce KV cache movement. A hierarchical quantization extension strategy is introduced to tackle the non-independent per-channel quantization issue. To further reduce KV cache movement, we transfer only the non-zero KV cache between the accelerator and off-chip memory. Moreover, we customize a two-stage design space exploration framework for the CPQ method. A novel pipeline and parallelism dataflow is designed to reduce the time to first token. Experiments show that Titanus achieves 159.9x (49.6x) and 34.8x (29.2x) higher energy efficiency (throughput) than the Nvidia A100 GPU and FlightLLM, respectively. The code for Titanus is available at https://github.com/peilin-chen/Titanus-for-LLM-acceleration.
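The abstract does not spell out the CPQ algorithm, so the following is only a minimal sketch of the general idea of pruning the KV cache, quantizing the surviving entries, and transferring only the non-zero values. The pruning criterion (per-channel magnitude quantile), the symmetric per-channel int8 quantization, and the function name `cascade_prune_quantize` are all assumptions for illustration, not the paper's actual method.

```python
# Illustrative sketch only: the pruning rule and quantization scheme below are
# assumptions; the paper's CPQ method and hierarchical quantization extension differ.
import numpy as np

def cascade_prune_quantize(kv, prune_ratio=0.3, num_bits=8):
    """Prune low-magnitude KV entries, then quantize the survivors per channel.

    kv: (num_tokens, num_channels) slice of the Key or Value cache.
    Returns int8 codes, per-channel scales, the retention mask, and the packed
    non-zero codes that would actually be moved off-chip.
    """
    # Step 1 (pruning): zero out the smallest-magnitude entries in each channel.
    threshold = np.quantile(np.abs(kv), prune_ratio, axis=0, keepdims=True)
    mask = np.abs(kv) >= threshold
    pruned = np.where(mask, kv, 0.0)

    # Step 2 (quantization): symmetric per-channel integer quantization of survivors.
    max_abs = np.max(np.abs(pruned), axis=0, keepdims=True)
    scale = np.where(max_abs > 0, max_abs / (2 ** (num_bits - 1) - 1), 1.0)
    codes = np.round(pruned / scale).astype(np.int8)

    # Step 3 (movement reduction): only the non-zero codes are transferred,
    # mirroring the idea of moving just the non-zero KV cache off-chip.
    packed = codes[np.nonzero(mask)]
    return codes, scale, mask, packed

# Example: compress a toy KV slice and report the reduction in moved entries.
kv = np.random.randn(128, 64).astype(np.float32)
codes, scale, mask, packed = cascade_prune_quantize(kv)
print(f"entries moved: {packed.size}/{kv.size}")
```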
Peilin Chen, Xiaoxuan Yang
Computing Technology; Computer Technology
Peilin Chen, Xiaoxuan Yang. Titanus: Enabling KV Cache Pruning and Quantization On-the-Fly for LLM Acceleration [EB/OL]. (2025-05-23) [2025-06-06]. https://arxiv.org/abs/2505.17787.