
Accelerating Diffusion Large Language Models with SlowFast Sampling: The Three Golden Principles

Source: arXiv

Abstract

Diffusion-based language models (dLLMs) have emerged as a promising alternative to traditional autoregressive LLMs by enabling parallel token generation and significantly reducing inference latency. However, existing sampling strategies for dLLMs, such as confidence-based or semi-autoregressive decoding, often suffer from static behavior, leading to suboptimal efficiency and limited flexibility. In this paper, we propose SlowFast Sampling, a novel dynamic sampling strategy that adaptively alternates between exploratory and accelerated decoding stages. Our method is guided by three golden principles: certainty principle, convergence principle, and positional principle, which govern when and where tokens can be confidently and efficiently decoded. We further integrate our strategy with dLLM-Cache to reduce redundant computation. Extensive experiments across benchmarks and models show that SlowFast Sampling achieves up to 15.63$\times$ speedup on LLaDA with minimal accuracy drop, and up to 34.22$\times$ when combined with caching. Notably, our approach outperforms strong autoregressive baselines like LLaMA3 8B in throughput, demonstrating that well-designed sampling can unlock the full potential of dLLMs for fast and high-quality generation.
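The abstract's decoding scheme can be illustrated with a toy sketch. This is not the authors' implementation; the forward pass is replaced by a hypothetical `toy_confidences` stub, and `tau` (certainty threshold) and `fast_run` (parallel block size) are made-up parameters. The loop scans masked positions left to right (positional principle), commits only tokens whose confidence clears the threshold (certainty principle), and decodes a contiguous certain run in parallel as the accelerated "fast" phase:

```python
import random

def toy_confidences(tokens, mask_id, step, rng):
    # Hypothetical stand-in for a dLLM denoising pass: returns a
    # (token, confidence) guess for every currently masked position.
    # Confidence grows with the step count so the loop terminates.
    return {
        i: (i % 50 + 1, min(1.0, rng.random() + 0.15 * step))
        for i, t in enumerate(tokens) if t == mask_id
    }

def slowfast_decode(length, mask_id=-1, tau=0.9, fast_run=4, seed=0):
    """Toy SlowFast-style sampler (illustrative only).

    Slow phase: re-denoise until some positions are certain.
    Fast phase: commit the leftmost contiguous run of certain
    positions (up to `fast_run` tokens) in one parallel step.
    """
    rng = random.Random(seed)
    tokens = [mask_id] * length
    steps = 0
    while mask_id in tokens:
        steps += 1
        preds = toy_confidences(tokens, mask_id, steps, rng)
        # Certainty + positional principles: leftmost-first scan,
        # keep only positions above the confidence threshold.
        certain = [i for i in sorted(preds) if preds[i][1] >= tau]
        if not certain:
            continue  # slow phase: nothing certain yet, denoise again
        # Fast phase: grow a contiguous block from the leftmost
        # certain position and decode it in parallel.
        block = [certain[0]]
        for i in certain[1:]:
            if i == block[-1] + 1 and len(block) < fast_run:
                block.append(i)
            else:
                break
        for i in block:
            tokens[i] = preds[i][0]
    return tokens, steps
```

The convergence principle (decoding positions whose predictions have stabilized across steps) would require tracking per-position histories and is omitted here for brevity.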

Qingyan Wei, Yaojie Zhang, Zhiyuan Liu, Dongrui Liu, Linfeng Zhang

Computing Technology, Computer Technology

Qingyan Wei, Yaojie Zhang, Zhiyuan Liu, Dongrui Liu, Linfeng Zhang. Accelerating Diffusion Large Language Models with SlowFast Sampling: The Three Golden Principles [EB/OL]. (2025-06-12) [2025-07-16]. https://arxiv.org/abs/2506.10848.
