
Enhancing Efficiency and Exploration in Reinforcement Learning for LLMs

Source: arXiv
Abstract

Reasoning large language models (LLMs) excel in complex tasks, which has drawn significant attention to reinforcement learning (RL) for LLMs. However, existing approaches allocate an equal number of rollouts to all questions during the RL process, which is inefficient. This inefficiency stems from the fact that training on simple questions yields limited gains, whereas more rollouts are needed for challenging questions to sample correct answers. Furthermore, while RL improves response precision, it limits the model's exploration ability, potentially resulting in a performance cap below that of the base model prior to RL. To address these issues, we propose a mechanism for dynamically allocating rollout budgets based on the difficulty of the problems, enabling more efficient RL training. Additionally, we introduce an adaptive dynamic temperature adjustment strategy to maintain the entropy at a stable level, thereby encouraging sufficient exploration. This enables LLMs to improve response precision while preserving their exploratory ability to uncover potential correct pathways. The code and data are available at: https://github.com/LiaoMengqi/E3-RL4LLMs
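The abstract describes the two mechanisms only at a high level. Below is a minimal, self-contained Python sketch of how such a scheme might look; the difficulty estimate (one minus a question's recent pass rate), the proportional temperature update, and all names (allocate_rollouts, adjust_temperature, gain, target_entropy) are illustrative assumptions, not the authors' implementation (see the linked repository for that).

# Illustrative sketch only; not the authors' method.

def allocate_rollouts(pass_rates, total_budget, min_rollouts=2):
    """Give harder questions (lower estimated pass rate) a larger share of a
    fixed rollout budget, instead of splitting it equally across questions."""
    difficulties = [1.0 - p for p in pass_rates]   # harder question -> larger weight
    total = sum(difficulties) or 1.0
    # Budgets are approximate; a full implementation would renormalize so they
    # sum exactly to total_budget.
    return [max(min_rollouts, round(total_budget * d / total)) for d in difficulties]

def adjust_temperature(current_temp, entropy, target_entropy, gain=0.1,
                       min_temp=0.5, max_temp=1.5):
    """Nudge the sampling temperature so policy entropy stays near a target
    level, preserving exploration as RL sharpens the policy."""
    new_temp = current_temp + gain * (target_entropy - entropy)
    return min(max_temp, max(min_temp, new_temp))

if __name__ == "__main__":
    # Four questions with estimated pass rates from earlier rollouts, 32 rollouts total.
    print(allocate_rollouts([0.9, 0.6, 0.3, 0.1], total_budget=32))
    # Entropy has dropped below the target, so the temperature is raised slightly.
    print(adjust_temperature(current_temp=1.0, entropy=0.8, target_entropy=1.2))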

Mengqi Liao, Xiangyu Xi, Ruinian Chen, Jia Leng, Yangen Hu, Ke Zeng, Shuai Liu, Huaiyu Wan

Subject: Computing Technology; Computer Technology

Mengqi Liao, Xiangyu Xi, Ruinian Chen, Jia Leng, Yangen Hu, Ke Zeng, Shuai Liu, Huaiyu Wan. Enhancing Efficiency and Exploration in Reinforcement Learning for LLMs [EB/OL]. (2025-05-24) [2025-06-06]. https://arxiv.org/abs/2505.18573
