SuperRL: Reinforcement Learning with Supervision to Boost Language Model Reasoning

Source: arXiv
Abstract

Large language models are increasingly used for complex reasoning tasks where high-quality offline data, such as expert-annotated solutions and distilled reasoning traces, are often available. However, in environments with sparse rewards, reinforcement learning struggles to sample successful trajectories, leading to inefficient learning. At the same time, such offline trajectories, which represent correct reasoning paths, go unused by standard on-policy reinforcement learning methods. To address this limitation, we propose SuperRL, a unified training framework that adaptively incorporates offline supervision into reinforcement learning. SuperRL introduces an Adaptive Switch to detect sparse-reward conditions and activates a Hybrid Actor when necessary. The Hybrid Actor integrates policy-gradient and supervised learning objectives at the loss level, enabling the model to benefit from accurate offline reasoning signals while maintaining the exploratory capacity of reinforcement learning. Experiments on a range of reasoning benchmarks show that SuperRL consistently outperforms standard reinforcement learning by improving sample efficiency, generalization, and robustness under sparse rewards.
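The loss-level combination described in the abstract can be illustrated with a short sketch. The snippet below is a minimal, hypothetical PyTorch rendering of the idea, not the authors' implementation: the names hybrid_loss, alpha, and sparsity_threshold are assumptions, and the switch condition (blending in offline supervision only when too few rollouts receive a reward) is one plausible reading of the Adaptive Switch.

```python
# Minimal sketch (an assumption, not the paper's code) of a loss-level hybrid
# of policy-gradient and supervised objectives with a sparsity-based switch.
import torch.nn.functional as F


def hybrid_loss(rollout_logits, rollout_ids, rewards,
                offline_logits, offline_ids,
                alpha=0.5, sparsity_threshold=0.1):
    """Blend an on-policy policy-gradient loss with SFT on offline traces.

    rollout_logits: (B, T, V) logits for rollouts sampled from the policy
    rollout_ids:    (B, T)    token ids of those sampled rollouts
    rewards:        (B,)      scalar reward per rollout (sparse)
    offline_logits: (B, T, V) logits when teacher-forcing the offline traces
    offline_ids:    (B, T)    expert / distilled reasoning trace token ids
    """
    # REINFORCE-style policy-gradient objective on the sampled rollouts.
    logp = F.log_softmax(rollout_logits, dim=-1)
    token_logp = logp.gather(-1, rollout_ids.unsqueeze(-1)).squeeze(-1)
    pg_loss = -(rewards.unsqueeze(-1) * token_logp).mean()

    # "Adaptive Switch" (assumed form): keep plain RL when enough rollouts
    # receive a nonzero reward; otherwise blend in offline supervision.
    reward_rate = (rewards != 0).float().mean()
    if reward_rate >= sparsity_threshold:
        return pg_loss

    # Supervised cross-entropy on the offline reasoning traces.
    sft_loss = F.cross_entropy(
        offline_logits.reshape(-1, offline_logits.size(-1)),
        offline_ids.reshape(-1),
    )

    # "Hybrid Actor": combine the two objectives at the loss level.
    return alpha * pg_loss + (1 - alpha) * sft_loss
```

In practice the mixing weight and the switch statistic would be tuned per task; the sketch only shows where offline supervision enters the objective.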

Yihao Liu, Shuocheng Li, Lang Cao, Yuhang Xie, Mengyu Zhou, Haoyu Dong, Xiaojun Ma, Shi Han, Dongmei Zhang

Computing Technology, Computer Technology

Yihao Liu, Shuocheng Li, Lang Cao, Yuhang Xie, Mengyu Zhou, Haoyu Dong, Xiaojun Ma, Shi Han, Dongmei Zhang. SuperRL: Reinforcement Learning with Supervision to Boost Language Model Reasoning[EB/OL]. (2025-06-01)[2025-07-16]. https://arxiv.org/abs/2506.01096.
