SHARP: Synthesizing High-quality Aligned Reasoning Problems for Large Reasoning Models Reinforcement Learning
Training large reasoning models (LRMs) with reinforcement learning in STEM domains is hindered by the scarcity of high-quality, diverse, and verifiable problem sets. Existing synthesis methods, such as Chain-of-Thought prompting, often generate oversimplified or uncheckable data, limiting model advancement on complex tasks. To address these challenges, we introduce SHARP, a unified approach to Synthesizing High-quality Aligned Reasoning Problems for LRM reinforcement learning with verifiable rewards (RLVR). SHARP encompasses a strategic set of self-alignment principles -- targeting graduate and Olympiad-level difficulty, rigorous logical consistency, and unambiguous, verifiable answers -- and a structured three-phase framework (Alignment, Instantiation, Inference) that ensures thematic diversity and fine-grained control over problem generation. We implement SHARP by leveraging a state-of-the-art LRM to infer and verify challenging STEM questions, then employ a reinforcement learning loop to refine the model's reasoning through verifiable reward signals. Experiments on benchmarks such as GPQA demonstrate that SHARP-augmented training substantially outperforms existing methods, markedly improving complex reasoning accuracy and pushing LRM performance closer to expert-level proficiency. Our contributions include the SHARP strategy, framework design, end-to-end implementation, and experimental evaluation of its effectiveness in elevating LRM reasoning capabilities.
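To make the abstract's three-phase pipeline concrete, below is a minimal, illustrative Python sketch of how Alignment (constraint specification), Instantiation (parameterized problem generation), and Inference with a verifiable-reward check could fit together. The ALIGNMENT_SPEC schema, the instantiate_problem template, the solve_with_lrm stub, and the exact-match reward are assumptions introduced here for illustration; they are not the authors' released implementation.

```python
import random

# --- Phase 1: Alignment (hypothetical constraint spec, not the paper's exact schema) ---
ALIGNMENT_SPEC = {
    "domain": "number_theory",
    "difficulty": "olympiad",    # target graduate/Olympiad-level difficulty
    "answer_type": "integer",    # unambiguous, machine-verifiable answer
}

# --- Phase 2: Instantiation: fill a parameterized template with concrete values ---
def instantiate_problem(spec, rng):
    a = rng.randint(2, 50)
    m = rng.choice([97, 101, 103])                 # prime moduli keep the answer well-defined
    question = f"Compute {a}^{m - 2} mod {m}."     # modular inverse via Fermat's little theorem
    answer = pow(a, m - 2, m)                      # exact ground truth, checkable automatically
    return {"question": question, "answer": answer, **spec}

# --- Phase 3: Inference + RLVR-style reward: an LRM solves the problem; here a stub
# --- stands in, and the reward is 1.0 only on an exact answer match ---
def solve_with_lrm(question):
    """Placeholder for a call to a large reasoning model's inference endpoint."""
    raise NotImplementedError("plug in your LRM inference call here")

def verifiable_reward(predicted, ground_truth):
    return 1.0 if predicted == ground_truth else 0.0

if __name__ == "__main__":
    rng = random.Random(0)
    sample = instantiate_problem(ALIGNMENT_SPEC, rng)
    print(sample["question"], "-> ground truth:", sample["answer"])
```

In an actual RLVR loop, the reward from verifiable_reward would feed a policy-gradient update of the reasoning model; the sketch only shows how synthesized problems carry exact answers that make such rewards checkable.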
Chengfu Tang, Dingnan Jin, Qing Cui, Jun Zhou, Xiong Jun Wu, Zhenduo Zhang, ZuJie Wen, Zhiqiang Zhang, Wang Ren, Lei Shi, Cai Chen, Deng Zhao, Qing Wang, Xudong Han
Subjects: Natural science research methods; Information science, information technology; Computing technology, computer technology
Chengfu Tang, Dingnan Jin, Qing Cui, Jun Zhou, Xiong Jun Wu, Zhenduo Zhang, ZuJie Wen, Zhiqiang Zhang, Wang Ren, Lei Shi, Cai Chen, Deng Zhao, Qing Wang, Xudong Han. SHARP: Synthesizing High-quality Aligned Reasoning Problems for Large Reasoning Models Reinforcement Learning [EB/OL]. (2025-05-20) [2025-06-13]. https://arxiv.org/abs/2505.14147.