
SPEED-RL: Faster Training of Reasoning Models via Online Curriculum Learning

Source: arXiv

Abstract

Training large language models with reinforcement learning (RL) against verifiable rewards significantly enhances their reasoning abilities, yet remains computationally expensive due to inefficient uniform prompt sampling. We introduce Selective Prompting with Efficient Estimation of Difficulty (SPEED), an adaptive online RL curriculum that selectively chooses training examples of intermediate difficulty to maximize learning efficiency. Theoretically, we establish that intermediate-difficulty prompts improve the gradient estimator's signal-to-noise ratio, accelerating convergence. Empirically, our efficient implementation leads to 2x to 6x faster training without degrading accuracy, requires no manual tuning, and integrates seamlessly into standard RL algorithms.
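The abstract describes the core idea only at a high level: cheaply estimate each prompt's difficulty online and spend rollouts only on prompts of intermediate difficulty, where the policy-gradient signal per sample is largest. As a rough illustration, below is a minimal Python sketch of one way such a filter could look. The function names, the small-rollout screening, and the pass-rate thresholds (0.2 to 0.8) are illustrative assumptions for exposition, not the paper's actual SPEED implementation.

```python
# Hypothetical sketch of intermediate-difficulty prompt selection for an online
# RL curriculum. Names, thresholds, and the screening procedure are assumptions
# made for illustration; they are not taken from the SPEED-RL paper.
import random
from typing import Callable, List


def estimate_pass_rate(prompt: str,
                       generate_and_verify: Callable[[str], bool],
                       num_rollouts: int = 4) -> float:
    """Cheap difficulty estimate: empirical pass rate over a few sampled
    completions checked against a verifiable reward."""
    successes = sum(generate_and_verify(prompt) for _ in range(num_rollouts))
    return successes / num_rollouts


def select_intermediate_prompts(prompt_pool: List[str],
                                generate_and_verify: Callable[[str], bool],
                                batch_size: int,
                                low: float = 0.2,
                                high: float = 0.8) -> List[str]:
    """Keep prompts whose estimated pass rate is neither ~0 (too hard) nor ~1
    (too easy); these give the gradient estimator the best signal-to-noise
    ratio per rollout."""
    selected = []
    for prompt in random.sample(prompt_pool, k=len(prompt_pool)):
        if low <= estimate_pass_rate(prompt, generate_and_verify) <= high:
            selected.append(prompt)
        if len(selected) == batch_size:
            break
    return selected
```

A batch returned by `select_intermediate_prompts` would then be passed to a standard RL update (e.g., a policy-gradient step against the verifiable reward), so the curriculum layer stays separate from, and compatible with, the underlying RL algorithm.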

Daman Arora, Song Mei, Andrea Zanette, Ruiqi Zhang

Subjects: Computing Technology; Computer Technology

Daman Arora, Song Mei, Andrea Zanette, Ruiqi Zhang. SPEED-RL: Faster Training of Reasoning Models via Online Curriculum Learning [EB/OL]. (2025-07-08) [2025-07-16]. https://arxiv.org/abs/2506.09016
