
Efficient Reasoning Through Suppression of Self-Affirmation Reflections in Large Reasoning Models


Source: arXiv
Abstract

While recent advances in large reasoning models have demonstrated remarkable performance, efficient reasoning remains critical due to the rapid growth of output length. Existing optimization approaches highlight a tendency toward "overthinking", yet lack fine-grained analysis. In this work, we focus on Self-Affirmation Reflections: redundant reflective steps that affirm prior content and often occur after reasoning steps that are already correct. Observations of both original and optimized reasoning models reveal pervasive self-affirmation reflections. Notably, these reflections sometimes lead to longer outputs in optimized models than in their original counterparts. Through detailed analysis, we uncover an intriguing pattern: compared to other reflections, the leading words (i.e., the first word of sentences) in self-affirmation reflections exhibit a distinct probability bias. Motivated by this insight, we locate self-affirmation reflections and conduct a train-free experiment demonstrating that suppressing them reduces output length without degrading accuracy across multiple models (R1-Distill-Models, QwQ-32B, and Qwen3-32B). Furthermore, we also improve a current train-based method by explicitly suppressing such reflections. In our experiments, we achieve length compression of 18.7% in the train-free setting and 50.2% in the train-based setting for R1-Distill-Qwen-1.5B. Moreover, our improvements are simple yet practical and can be directly applied to existing inference frameworks, such as vLLM. We believe that our findings will provide the community with insights for achieving more precise length compression and step-level efficient reasoning.
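The abstract does not give implementation details, but a train-free suppression of this kind can be sketched as a decoding-time logits adjustment. The sketch below is an illustrative assumption, not the authors' published method: the token ids, leading-word list, and penalty value are placeholders, and the callable merely follows the common (generated_token_ids, logits) convention of logits-processor hooks in inference frameworks such as vLLM.

```python
import torch

# Hypothetical token ids for "leading words" that tend to open
# self-affirmation reflections (e.g. "Wait", "Alternatively").
# These ids and the penalty are illustrative placeholders.
AFFIRMATION_LEADING_TOKEN_IDS = {14190, 92014}
SENTENCE_END_TOKEN_IDS = {13, 0, 30}   # placeholder ids for ".", "!", "?"
SUPPRESSION_PENALTY = 5.0              # amount subtracted from the logits


def suppress_self_affirmation(generated_token_ids: list[int],
                              logits: torch.Tensor) -> torch.Tensor:
    """Penalize reflection-leading tokens right after a sentence boundary.

    A minimal train-free sketch: when the previously generated token ends a
    sentence, lower the logits of candidate leading words so the model is
    less likely to open a self-affirmation reflection.
    """
    if generated_token_ids and generated_token_ids[-1] in SENTENCE_END_TOKEN_IDS:
        for tok in AFFIRMATION_LEADING_TOKEN_IDS:
            logits[tok] -= SUPPRESSION_PENALTY
    return logits
```

Because the adjustment only touches next-token logits at sentence boundaries, such a hook can be registered with an existing serving stack without retraining or modifying model weights, which is consistent with the paper's claim that the method applies directly to frameworks like vLLM.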

Kaiyuan Liu, Chen Shen, Zhanwei Zhang, Junjie Liu, Xiaosong Yuan, Jieping Ye

Computing technology; computer technology

Kaiyuan Liu, Chen Shen, Zhanwei Zhang, Junjie Liu, Xiaosong Yuan, Jieping Ye. Efficient Reasoning Through Suppression of Self-Affirmation Reflections in Large Reasoning Models [EB/OL]. (2025-06-14) [2025-06-25]. https://arxiv.org/abs/2506.12353.
