HeteroSpec: Leveraging Contextual Heterogeneity for Efficient Speculative Decoding
Autoregressive decoding, the standard approach for Large Language Model (LLM) inference, remains a significant bottleneck due to its sequential nature. While speculative decoding algorithms mitigate this inefficiency through parallel verification, they fail to exploit the inherent heterogeneity in linguistic complexity, which leads to suboptimal resource allocation. We address this by proposing HeteroSpec, a heterogeneity-adaptive speculative decoding framework that dynamically optimizes computational resource allocation based on linguistic context complexity. HeteroSpec introduces two key mechanisms: (1) a novel cumulative meta-path Top-$K$ entropy metric for efficiently identifying predictable contexts, and (2) a dynamic resource allocation strategy based on data-driven entropy partitioning, enabling adaptive speculative expansion and pruning tailored to local context difficulty. Evaluated on five public benchmarks and four models, HeteroSpec achieves an average speedup of 4.26$\times$ and consistently outperforms the state-of-the-art EAGLE-3 in speedup, average acceptance length, and verification cost. Notably, HeteroSpec requires no draft-model retraining, incurs minimal overhead, and is orthogonal to other acceleration techniques. It delivers greater acceleration with stronger draft models, establishing a new paradigm for context-aware LLM inference acceleration.
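The abstract does not spell out how the cumulative meta-path Top-$K$ entropy metric or the entropy-partitioned resource allocation are computed; the sketch below is a minimal illustration of one plausible reading, assuming the entropy is taken over the renormalized top-$K$ draft probabilities at each node along the current draft path, and that the cumulative value is mapped to a speculation depth. The function names (`topk_entropy`, `speculation_depth`) and the threshold/depth values are hypothetical placeholders, not the paper's actual metric or partition boundaries.

```python
import torch


def topk_entropy(logits: torch.Tensor, k: int = 8) -> torch.Tensor:
    """Entropy over the renormalized top-k next-token probabilities.

    Low values suggest the draft model is confident (a "predictable" context);
    high values suggest an uncertain context. Illustrative only.
    """
    probs = torch.softmax(logits, dim=-1)
    topk_probs, _ = torch.topk(probs, k, dim=-1)
    topk_probs = topk_probs / topk_probs.sum(dim=-1, keepdim=True)
    return -(topk_probs * topk_probs.clamp_min(1e-12).log()).sum(dim=-1)


def speculation_depth(path_logits, k: int = 8,
                      thresholds=(0.5, 1.5), depths=(8, 5, 2)) -> int:
    """Map the cumulative top-k entropy along the current draft path to a
    speculation depth: expand more in easy contexts, prune in hard ones.

    `thresholds` and `depths` are assumed values, standing in for the
    data-driven entropy partition described in the abstract.
    """
    cumulative = sum(topk_entropy(step, k).item() for step in path_logits)
    for bound, depth in zip(thresholds, depths):
        if cumulative <= bound:
            return depth
    return depths[-1]


if __name__ == "__main__":
    vocab_size = 32000
    # Draft-model logits at each node along the current speculative path.
    path = [torch.randn(vocab_size) for _ in range(3)]
    print("chosen speculation depth:", speculation_depth(path))
```

In this reading, the dynamic allocation is simply a lookup from the cumulative entropy into a small set of expansion depths, which keeps the per-step overhead negligible relative to draft and verification passes.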
Siran Liu, Yang Ye, Qianchao Zhu, Zheng Cao, Yongchao He
Computing Technology; Computer Technology
Siran Liu, Yang Ye, Qianchao Zhu, Zheng Cao, Yongchao He. HeteroSpec: Leveraging Contextual Heterogeneity for Efficient Speculative Decoding [EB/OL]. (2025-05-19) [2025-06-28]. https://arxiv.org/abs/2505.13254.