Confidence-Modulated Speculative Decoding for Large Language Models

Source: arXiv
Abstract

Speculative decoding has emerged as an effective approach for accelerating autoregressive inference by parallelizing token generation through a draft-then-verify paradigm. However, existing methods rely on static drafting lengths and rigid verification criteria, limiting their adaptability across varying model uncertainties and input complexities. This paper proposes an information-theoretic framework for speculative decoding based on confidence-modulated drafting. By leveraging entropy and margin-based uncertainty measures over the drafter's output distribution, the proposed method dynamically adjusts the number of speculatively generated tokens at each iteration. This adaptive mechanism reduces rollback frequency, improves resource utilization, and maintains output fidelity. Additionally, the verification process is modulated using the same confidence signals, enabling more flexible acceptance of drafted tokens without sacrificing generation quality. Experiments on machine translation and summarization tasks demonstrate significant speedups over standard speculative decoding while preserving or improving BLEU and ROUGE scores. The proposed approach offers a principled, plug-in method for efficient and robust decoding in large language models under varying conditions of uncertainty.
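The page gives no reference implementation, so the following is a minimal, hypothetical Python sketch of the idea the abstract describes: the drafter's per-step confidence, derived from the entropy and top-two margin of its output distribution, modulates how many tokens are drafted before verification. The function names (confidence, draft_length), the 50/50 weighting of the two signals, and the default window of 1-8 tokens are illustrative assumptions, not the authors' code.

```python
import math

def confidence(probs):
    """Entropy- and margin-based confidence over the drafter's
    next-token distribution (probs: probabilities summing to 1)."""
    entropy = -sum(p * math.log(p) for p in probs if p > 0.0)
    max_entropy = math.log(len(probs))          # entropy of a uniform distribution
    top2 = sorted(probs, reverse=True)[:2]
    margin = top2[0] - (top2[1] if len(top2) > 1 else 0.0)
    # Combine normalised (1 - entropy) with the top-two margin; this equal
    # weighting is an illustrative heuristic, not the paper's exact formulation.
    return 0.5 * (1.0 - entropy / max_entropy) + 0.5 * margin

def draft_length(conf, min_len=1, max_len=8):
    """Map confidence in [0, 1] to the number of tokens to draft this
    iteration: a confident drafter speculates further ahead, while an
    uncertain one drafts conservatively to reduce rollbacks."""
    return min_len + round(conf * (max_len - min_len))

# A peaked drafter distribution earns a long speculative window; a flat,
# maximally uncertain one falls back to a single cautiously drafted token.
print(draft_length(confidence([0.90, 0.05, 0.03, 0.02])))  # 6 with these defaults
print(draft_length(confidence([0.25, 0.25, 0.25, 0.25])))  # 1
```

Per the abstract, the same confidence signal would also modulate verification, relaxing or tightening the acceptance criterion for drafted tokens rather than applying a single rigid threshold; that part is omitted from this sketch.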

Jaydip Sen, Subhasis Dasgupta, Hetvi Waghela

Subject: Computing technology, computer technology

Jaydip Sen, Subhasis Dasgupta, Hetvi Waghela. Confidence-Modulated Speculative Decoding for Large Language Models [EB/OL]. (2025-08-21) [2025-09-02]. https://arxiv.org/abs/2508.15371.
