
HAMburger: Accelerating LLM Inference via Token Smashing

Source: arXiv

Abstract

The growing demand for efficient Large Language Model (LLM) inference requires holistic optimization across algorithms, systems, and hardware. However, very few works have fundamentally changed the generation pattern: each token requires one forward pass and one KV cache entry. This can be sub-optimal because we find that LLMs are remarkably capable of self-identifying exactly how much information a single KV entry can store, and many tokens can be generated confidently without global context. Based on this insight, we introduce HAMburger, a Hierarchically Auto-regressive Model that redefines resource allocation in LLMs by moving beyond uniform computation and storage per token during inference. By stacking a compositional embedder below and a micro-step decoder above a base LLM, HAMburger smashes multiple tokens into a single KV entry and generates several tokens per step. In addition, HAMburger functions as a speculative decoding framework that can blindly trust its self-drafted tokens. As a result, HAMburger shifts the growth of the KV cache and forward FLOPs from linear to sub-linear with respect to output length, and adjusts its inference speed based on query perplexity and output structure. Extensive evaluations show that HAMburger reduces KV cache computation by up to 2$\times$ and achieves up to 2$\times$ TPS, while maintaining quality on both short- and long-context tasks. Our method explores an extremely challenging inference regime that requires both computation and memory efficiency with a hardware-agnostic design.
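The generation pattern described above can be illustrated with a minimal, runnable PyTorch sketch. It shows only the control flow: a compositional embedder merges the tokens emitted in the previous step into one vector, the base model runs a single forward pass (standing in for one KV cache entry), and a micro-step decoder emits several tokens before handing control back. All class and function names here (CompositionalEmbedder, ToyBaseModel, MicroStepDecoder, generate) and the GRU-based stand-ins are hypothetical illustrations for this page, not the authors' implementation, which uses a transformer base model with a real KV cache.

import torch
import torch.nn as nn

class CompositionalEmbedder(nn.Module):
    """Merges the tokens emitted in the previous step into one input vector."""
    def __init__(self, vocab, dim):
        super().__init__()
        self.embed = nn.Embedding(vocab, dim)

    def forward(self, token_ids):               # token_ids: (k,)
        return self.embed(token_ids).mean(0)    # toy composition -> (dim,)

class ToyBaseModel(nn.Module):
    """Stand-in for the base LLM: one call here corresponds to one forward pass
    and one stored KV entry in a real transformer."""
    def __init__(self, dim):
        super().__init__()
        self.cell = nn.GRUCell(dim, dim)

    def forward(self, x, state):                # x, state: (dim,)
        return self.cell(x.unsqueeze(0), state.unsqueeze(0)).squeeze(0)

class MicroStepDecoder(nn.Module):
    """Emits several tokens from a single base-model hidden state, stopping
    when its (untrained, toy) stop head says to hand control back."""
    def __init__(self, vocab, dim, max_micro_steps=4):
        super().__init__()
        self.embed = nn.Embedding(vocab, dim)
        self.cell = nn.GRUCell(dim, dim)
        self.head = nn.Linear(dim, vocab)
        self.stop_head = nn.Linear(dim, 1)
        self.max_micro_steps = max_micro_steps

    def forward(self, hidden):
        tokens, h = [], hidden.unsqueeze(0)     # (1, dim)
        x = torch.zeros_like(h)
        for _ in range(self.max_micro_steps):
            h = self.cell(x, h)
            tok = int(self.head(h).argmax())
            tokens.append(tok)
            if torch.sigmoid(self.stop_head(h)).item() < 0.5:
                break                           # stop early: variable tokens per step
            x = self.embed(torch.tensor([tok]))
        return tokens

@torch.no_grad()
def generate(prompt_ids, steps=5, vocab=100, dim=32):
    embedder = CompositionalEmbedder(vocab, dim)
    base = ToyBaseModel(dim)
    micro = MicroStepDecoder(vocab, dim)
    state = torch.zeros(dim)
    last_tokens, output = prompt_ids, []
    for _ in range(steps):                      # each iteration: ONE base forward / KV entry,
        state = base(embedder(last_tokens), state)
        new_tokens = micro(state)               # ... but SEVERAL output tokens
        output.extend(new_tokens)
        last_tokens = torch.tensor(new_tokens)
    return output

print(generate(torch.tensor([1, 2, 3])))

In a real system following this pattern, the KV cache and base-model FLOPs grow with the number of coarse steps rather than with the number of output tokens, which is how the abstract's sub-linear scaling claim arises.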

Jingyu Liu, Ce Zhang

Computing Technology, Computer Technology

Jingyu Liu, Ce Zhang. HAMburger: Accelerating LLM Inference via Token Smashing [EB/OL]. (2025-05-26) [2025-06-27]. https://arxiv.org/abs/2505.20438.
