Blending Supervised and Reinforcement Fine-Tuning with Prefix Sampling
Existing post-training techniques for large language models are broadly categorized into Supervised Fine-Tuning (SFT) and Reinforcement Fine-Tuning (RFT). Each paradigm presents a distinct trade-off: SFT excels at mimicking demonstration data but, as a form of behavior cloning, can generalize poorly. Conversely, RFT can significantly enhance a model's performance but is prone to learning unexpected behaviors, and its performance is highly sensitive to the initial policy. In this paper, we propose a unified view of these methods and introduce Prefix-RFT, a hybrid approach that synergizes learning from both demonstration and exploration. Using mathematical reasoning problems as a testbed, we empirically demonstrate that Prefix-RFT is both simple and effective. It not only surpasses the performance of standalone SFT and RFT but also outperforms parallel mixed-policy RFT methods. A key advantage is its seamless integration into existing open-source frameworks, requiring only minimal modifications to the standard RFT pipeline. Our analysis highlights the complementary nature of SFT and RFT, and validates that Prefix-RFT effectively harmonizes these two learning paradigms. Furthermore, ablation studies confirm the method's robustness to variations in the quality and quantity of demonstration data. We hope this work offers a new perspective on LLM post-training, suggesting that a unified paradigm that judiciously integrates demonstration and exploration could be a promising direction for future research.
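The abstract does not spell out the mechanics, but the title suggests that rollouts are seeded with prefixes of demonstrations before standard RFT updates. The minimal Python sketch below illustrates one plausible reading of prefix sampling: take the first portion of a demonstration, let the current policy generate the continuation, and hand the combined sequence to an otherwise unchanged RFT pipeline. All names here (`sample_prefix_rollout`, `toy_policy_generate`, `prefix_ratio`) are illustrative assumptions, not the authors' implementation.

```python
import random

def sample_prefix_rollout(demonstration_tokens, policy_generate, prefix_ratio=0.5):
    """Build one mixed rollout: an off-policy demonstration prefix followed by
    an on-policy continuation. Returns the full sequence and the index where
    the on-policy tokens begin (assumed structure, for illustration only)."""
    cut = max(1, int(len(demonstration_tokens) * prefix_ratio))
    prefix = demonstration_tokens[:cut]
    continuation = policy_generate(prefix)
    return prefix + continuation, cut

# Toy stand-in for the policy: appends random tokens instead of an LLM call.
def toy_policy_generate(prefix, max_new_tokens=8, vocab=("a", "b", "c")):
    return [random.choice(vocab) for _ in range(max_new_tokens)]

demo = list("the answer is 42")  # hypothetical demonstration sequence
rollout, prefix_len = sample_prefix_rollout(demo, toy_policy_generate, prefix_ratio=0.4)
print("prefix:", rollout[:prefix_len])
print("continuation:", rollout[prefix_len:])
```

In an actual RFT loop, such prefix-seeded rollouts would presumably be scored and optimized alongside ordinary on-policy rollouts, which is consistent with the claim that only minimal modifications to the standard RFT pipeline are needed.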
Zeyu Huang, Tianhao Cheng, Zihan Qiu, Zili Wang, Yinghui Xu, Edoardo M. Ponti, Ivan Titov
Computing Technology, Computer Technology
Zeyu Huang, Tianhao Cheng, Zihan Qiu, Zili Wang, Yinghui Xu, Edoardo M. Ponti, Ivan Titov. Blending Supervised and Reinforcement Fine-Tuning with Prefix Sampling [EB/OL]. (2025-07-02) [2025-07-16]. https://arxiv.org/abs/2507.01679.