
Time Is a Feature: Exploiting Temporal Dynamics in Diffusion Language Models

Source: arXiv

Abstract

Diffusion large language models (dLLMs) generate text through iterative denoising, yet current decoding strategies discard rich intermediate predictions in favor of the final output. Our work reveals a critical phenomenon, temporal oscillation, in which correct answers often emerge at intermediate denoising steps but are overwritten later. To address this issue, we introduce two complementary methods that exploit temporal consistency: 1) Temporal Self-Consistency Voting, a training-free, test-time decoding strategy that aggregates predictions across denoising steps to select the most consistent output; and 2) a post-training method termed Temporal Consistency Reinforcement, which uses Temporal Semantic Entropy (TSE), a measure of semantic stability across intermediate predictions, as a reward signal to encourage stable generations. Empirical results across multiple benchmarks demonstrate the effectiveness of our approach. Using the negative TSE reward alone, we observe a remarkable average improvement of 24.7% on the Countdown dataset over an existing dLLM. Combined with the accuracy reward, we achieve absolute gains of 2.0% on GSM8K, 4.3% on MATH500, 6.6% on SVAMP, and 25.3% on Countdown. Our findings underscore the untapped potential of temporal dynamics in dLLMs and offer two simple yet effective tools to harness them.
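To make the two methods concrete, below is a minimal Python sketch of how temporal self-consistency voting and Temporal Semantic Entropy could be computed from the answers decoded at each denoising step. Everything here is an illustrative assumption rather than the paper's implementation: the function names are hypothetical, the vote weighting is left uniform by default, and semantic equivalence is reduced to a user-supplied predicate (exact match in the usage example, where the paper may use a stronger notion).

```python
# Hypothetical sketch of the two ideas in the abstract; not the authors' code.
from collections import Counter
import math

def temporal_self_consistency_vote(intermediate_answers, weights=None):
    """Pick the answer most consistent across denoising steps.

    intermediate_answers: decoded answers, one per denoising step,
        ordered from early (noisy) to late (near-final) steps.
    weights: optional per-step vote weights; one plausible choice is to
        weight later steps more heavily. Uniform weights are assumed here.
    """
    if weights is None:
        weights = [1.0] * len(intermediate_answers)
    scores = Counter()
    for ans, w in zip(intermediate_answers, weights):
        scores[ans] += w
    return scores.most_common(1)[0][0]

def temporal_semantic_entropy(intermediate_answers, same_meaning):
    """Entropy over semantic clusters of the intermediate predictions.

    same_meaning: predicate judging whether two answers are semantically
        equivalent (exact match after normalization, an NLI model, etc.).
    Lower entropy means the intermediate answers are semantically stable;
    per the abstract, the negative of this quantity serves as a reward.
    """
    clusters = []  # representative answer per semantic cluster
    counts = []    # number of steps falling into each cluster
    for ans in intermediate_answers:
        for i, rep in enumerate(clusters):
            if same_meaning(ans, rep):
                counts[i] += 1
                break
        else:
            clusters.append(ans)
            counts.append(1)
    total = sum(counts)
    return -sum((c / total) * math.log(c / total) for c in counts)

# Usage: answers decoded at each of 8 denoising steps for one question.
steps = ["12", "42", "42", "42", "42", "42", "42", "41"]
print(temporal_self_consistency_vote(steps))                  # -> "42"
print(temporal_semantic_entropy(steps, lambda a, b: a == b))  # low = stable
```

Note how the example reflects the temporal-oscillation phenomenon: the correct answer appears at intermediate steps and is overwritten at the last one, yet voting over the trajectory still recovers it.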

Wen Wang, Bozhen Fang, Chenchen Jing, Yongliang Shen, Yangyi Shen, Qiuyu Wang, Hao Ouyang, Hao Chen, Chunhua Shen

Subjects: Computing Technology; Computer Technology

Wen Wang, Bozhen Fang, Chenchen Jing, Yongliang Shen, Yangyi Shen, Qiuyu Wang, Hao Ouyang, Hao Chen, Chunhua Shen. Time Is a Feature: Exploiting Temporal Dynamics in Diffusion Language Models [EB/OL]. (2025-08-12) [2025-08-24]. https://arxiv.org/abs/2508.09138.
