Draft Model Knows When to Stop: Self-Verification Speculative Decoding for Long-Form Generation
Conventional speculative decoding (SD) methods use a predefined draft length, which implicitly assumes that the target model will smoothly accept the proposed draft tokens. In practice, this assumption rarely holds: the oracle draft length varies significantly across contexts, and a fixed-length policy cannot accommodate this variation. The mismatch is further exacerbated in complex reasoning and long-form generation, particularly under test-time scaling for reasoning-specialized models. Through both theoretical analysis and empirical estimation, we show that the discrepancy between the draft and target models can be approximated by the draft model's prediction entropy: high entropy indicates a low acceptance rate of draft tokens, and vice versa. Based on this insight, we propose SVIP (Self-Verification Length Policy for Long-Context Speculative Decoding), a training-free dynamic length policy for speculative decoding systems that adaptively determines the length of each draft sequence from the draft model's entropy. Experimental results on mainstream SD benchmarks as well as reasoning-heavy benchmarks demonstrate the superior performance of SVIP, achieving up to a 17% speedup on MT-Bench at 8K context compared with fixed draft lengths, and a 22% speedup for QwQ in long-form reasoning.
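
The core mechanism described in the abstract, using the draft model's prediction entropy as a stopping signal for drafting, can be illustrated with a minimal sketch. The code below is not the authors' implementation; the function names (propose_draft, token_entropy), the toy draft model, the greedy draft selection, and the entropy threshold value are illustrative assumptions only.

# Minimal sketch of an entropy-based dynamic draft-length policy in the
# spirit of SVIP. All names and the threshold are hypothetical, not taken
# from the paper's code.
import numpy as np

def token_entropy(probs: np.ndarray) -> float:
    """Shannon entropy (in nats) of a single next-token distribution."""
    p = np.clip(probs, 1e-12, 1.0)
    return float(-(p * np.log(p)).sum())

def propose_draft(draft_step, prefix, entropy_threshold=2.0, max_draft_len=8):
    """Propose draft tokens one at a time, stopping early when the draft
    model's prediction entropy is high, i.e. when acceptance by the target
    model is expected to be low."""
    draft = []
    context = list(prefix)
    for _ in range(max_draft_len):
        probs = draft_step(context)           # next-token distribution from the draft model
        if token_entropy(probs) > entropy_threshold:
            break                             # uncertain draft -> hand control back to the target model
        token = int(np.argmax(probs))         # greedy draft token (sampling would also work)
        draft.append(token)
        context.append(token)
    return draft

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    vocab = 32

    def toy_draft_step(context):
        # Stand-in for a real draft LM: confidence decays as the draft grows,
        # so entropy rises and the policy stops proposing.
        scale = max(0.5, 4.0 - 0.5 * len(context))
        logits = rng.normal(size=vocab) * scale
        exp = np.exp(logits - logits.max())
        return exp / exp.sum()

    print(propose_draft(toy_draft_step, prefix=[1, 2, 3]))

In a full SD loop, the returned draft would be verified in parallel by the target model; the entropy check only decides how many tokens to propose before each verification step.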
Zhaopeng Tu, Xingyu Chen, Ziyin Zhang, Jiahao Xu, Rui Wang, Tian Liang, Zhiwei He
Computing Technology, Computer Technology
Zhaopeng Tu, Xingyu Chen, Ziyin Zhang, Jiahao Xu, Rui Wang, Tian Liang, Zhiwei He. Draft Model Knows When to Stop: Self-Verification Speculative Decoding for Long-Form Generation [EB/OL]. (2025-08-24) [2025-09-06]. https://arxiv.org/abs/2411.18462.