
SRFT: A Single-Stage Method with Supervised and Reinforcement Fine-Tuning for Reasoning

Source: arXiv
Abstract

Large language models (LLMs) have achieved remarkable progress in reasoning tasks, yet the optimal integration of Supervised Fine-Tuning (SFT) and Reinforcement Learning (RL) remains a fundamental challenge. Through comprehensive analysis of token distributions, learning dynamics, and integration mechanisms from entropy-based perspectives, we reveal key differences between these paradigms: SFT induces coarse-grained global changes to LLM policy distributions, while RL performs fine-grained selective optimizations, with entropy serving as a critical indicator of training effectiveness. Building on these observations, we propose Supervised Reinforcement Fine-Tuning (SRFT), a single-stage method that unifies both fine-tuning paradigms through entropy-aware weighting mechanisms. Our approach simultaneously applies SFT and RL to directly optimize the LLM using demonstrations and self-exploration rollouts rather than through two-stage sequential methods. Extensive experiments show that SRFT achieves 59.1% average accuracy, outperforming zero-RL methods by 9.0% on five mathematical reasoning benchmarks and 10.9% on three out-of-distribution benchmarks.
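
The abstract does not give the exact objective, but the idea it describes (an SFT loss on demonstrations and an RL loss on self-exploration rollouts, combined in a single stage through entropy-aware weights) can be sketched roughly as below. This is a minimal, hypothetical PyTorch sketch, not the paper's implementation: the weighting functions `w_sft` and `w_rl`, the simple policy-gradient surrogate, the advantage estimator, and all tensor names are illustrative assumptions.

```python
# Hypothetical sketch of a single-stage SFT+RL objective with entropy-aware weighting.
# Assumes a HuggingFace-style causal LM whose forward pass returns an object with .logits.
import torch
import torch.nn.functional as F


def token_entropy(logits):
    """Mean per-token entropy of the policy distribution."""
    log_probs = F.log_softmax(logits, dim=-1)
    return -(log_probs.exp() * log_probs).sum(-1).mean()


def srft_style_loss(model, demo_batch, rollout_batch, advantages):
    # --- SFT term: cross-entropy on demonstration tokens ---
    demo_logits = model(demo_batch["input_ids"]).logits
    sft_loss = F.cross_entropy(
        demo_logits[:, :-1].reshape(-1, demo_logits.size(-1)),
        demo_batch["input_ids"][:, 1:].reshape(-1),
    )

    # --- RL term: simple policy-gradient surrogate on rollout tokens ---
    ro_logits = model(rollout_batch["input_ids"]).logits
    log_probs = F.log_softmax(ro_logits[:, :-1], dim=-1)
    token_logp = log_probs.gather(
        -1, rollout_batch["input_ids"][:, 1:].unsqueeze(-1)
    ).squeeze(-1)
    rl_loss = -(advantages.unsqueeze(-1) * token_logp).mean()  # advantages: one scalar per rollout

    # --- Entropy-aware weighting (assumed form, for illustration only) ---
    # One plausible reading of the abstract: temper the coarse-grained SFT update
    # when the policy is already confident (low entropy), and let the fine-grained
    # RL update dominate in that regime.
    with torch.no_grad():
        H = token_entropy(ro_logits)
    w_sft = torch.sigmoid(H - 1.0)
    w_rl = 1.0 - w_sft

    return w_sft * sft_loss + w_rl * rl_loss
```

Because both terms are applied in the same update, this sketches the single-stage character the abstract emphasizes, as opposed to two-stage pipelines that run SFT to convergence before starting RL.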

Yuqian Fu, Tinghong Chen, Jiajun Chai, Xihuai Wang, Songjun Tu, Guojun Yin, Wei Lin, Qichao Zhang, Yuanheng Zhu, Dongbin Zhao

Subject: Computing Technology, Computer Technology

Yuqian Fu, Tinghong Chen, Jiajun Chai, Xihuai Wang, Songjun Tu, Guojun Yin, Wei Lin, Qichao Zhang, Yuanheng Zhu, Dongbin Zhao. SRFT: A Single-Stage Method with Supervised and Reinforcement Fine-Tuning for Reasoning [EB/OL]. (2025-06-24) [2025-07-18]. https://arxiv.org/abs/2506.19767.
