Wait, We Don't Need to "Wait"! Removing Thinking Tokens Improves Reasoning Efficiency
Recent advances in large reasoning models have enabled complex, step-by-step reasoning but often introduce significant overthinking, resulting in verbose and redundant outputs that hinder efficiency. In this study, we examine whether explicit self-reflection, signaled by tokens such as "Wait" and "Hmm", is necessary for advanced reasoning. We propose NoWait, a simple yet effective approach that disables explicit self-reflection by suppressing these tokens during inference. Extensive experiments on ten benchmarks across textual, visual, and video reasoning tasks show that NoWait reduces chain-of-thought trajectory length by up to 27%-51% in five R1-style model series, without compromising model utility. NoWait thus offers a plug-and-play solution for efficient and utility-preserving multimodal reasoning.
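The abstract describes NoWait as suppressing reflection tokens such as "Wait" and "Hmm" at inference time. Below is a minimal sketch, not the authors' released implementation, of how such token suppression could be done with Hugging Face transformers' `bad_words_ids` option; the model checkpoint and the exact list of reflection words are assumptions for illustration.

```python
# Sketch: suppress "Wait"-style reflection tokens during decoding.
# Assumes an R1-style reasoning model; checkpoint name and word list are illustrative.
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

model_name = "deepseek-ai/DeepSeek-R1-Distill-Qwen-7B"  # assumed checkpoint
tokenizer = AutoTokenizer.from_pretrained(model_name)
model = AutoModelForCausalLM.from_pretrained(model_name, torch_dtype=torch.bfloat16)

# Surface forms associated with explicit self-reflection (assumed list).
reflection_words = ["Wait", " Wait", "Hmm", " Hmm"]
bad_words_ids = [
    tokenizer(w, add_special_tokens=False).input_ids for w in reflection_words
]

prompt = "Solve step by step: what is 17 * 24?"
inputs = tokenizer(prompt, return_tensors="pt")

# bad_words_ids makes generate() assign -inf logits to these token sequences,
# so the model cannot emit them while decoding its chain of thought.
outputs = model.generate(
    **inputs,
    max_new_tokens=512,
    bad_words_ids=bad_words_ids,
)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```

This kind of logit-level suppression is plug-and-play in the sense the abstract describes: it requires no fine-tuning and only changes the decoding step.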
Chenlong Wang, Yuanning Feng, Dongping Chen, Zhaoyang Chu, Ranjay Krishna, Tianyi Zhou
Subjects: Computing Technology; Computer Technology
Chenlong Wang, Yuanning Feng, Dongping Chen, Zhaoyang Chu, Ranjay Krishna, Tianyi Zhou. Wait, We Don't Need to "Wait"! Removing Thinking Tokens Improves Reasoning Efficiency [EB/OL]. (2025-06-09) [2025-06-29]. https://arxiv.org/abs/2506.08343.