
Pruning Long Chain-of-Thought of Large Reasoning Models via Small-Scale Preference Optimization

Source: arXiv
Abstract

Recent advances in Large Reasoning Models (LRMs) have demonstrated strong performance on complex tasks through long Chain-of-Thought (CoT) reasoning. However, their lengthy outputs increase computational costs and may lead to overthinking, posing challenges in balancing reasoning effectiveness and efficiency. Current methods for efficient reasoning often compromise reasoning quality or require extensive resources. This paper investigates efficient methods to reduce the generation length of LRMs. We analyze generation path distributions and filter generated trajectories through difficulty estimation. Subsequently, we analyze the convergence behaviors of the objectives of various preference optimization methods under a framework based on the Bradley-Terry loss. Based on this analysis, we propose Length Controlled Preference Optimization (LCPO), which directly balances the implicit reward related to the NLL loss. LCPO can effectively learn length preferences with limited data and training. Extensive experiments demonstrate that our approach significantly reduces the average output length by over 50% across multiple benchmarks while maintaining reasoning performance. Our work highlights the potential of computationally efficient approaches for guiding LRMs toward efficient reasoning.
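For context, the Bradley-Terry framework referenced above models the probability that one response is preferred over another, and in DPO-style preference optimization the reward is expressed implicitly through the policy's log-likelihood, i.e., the negative of its sequence NLL loss. The abstract does not give LCPO's exact objective; a standard formulation, using the usual DPO notation (policy $\pi_\theta$, reference model $\pi_{\mathrm{ref}}$, temperature $\beta$, preferred response $y_w$, dispreferred response $y_l$), reads:

\[
P(y_w \succ y_l \mid x) = \sigma\big(r(x, y_w) - r(x, y_l)\big),
\qquad
r_\theta(x, y) = \beta \log \frac{\pi_\theta(y \mid x)}{\pi_{\mathrm{ref}}(y \mid x)}
\]

In the length-pruning setting described here, $y_w$ would be a shorter correct trajectory and $y_l$ a longer one, so minimizing the Bradley-Terry loss shifts the implicit reward, which is tied to the sequence NLL, toward favoring concise reasoning paths.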

Bin Hong, Jiayu Liu, Zhenya Huang, Kai Zhang, Mengdi Zhang

Subject: Computing Technology; Computer Technology

Bin Hong, Jiayu Liu, Zhenya Huang, Kai Zhang, Mengdi Zhang. Pruning Long Chain-of-Thought of Large Reasoning Models via Small-Scale Preference Optimization [EB/OL]. (2025-08-13) [2025-08-24]. https://arxiv.org/abs/2508.10164.
