RePO: Replay-Enhanced Policy Optimization

Source: arXiv
Abstract

Reinforcement learning (RL) is vital for optimizing large language models (LLMs). Recent Group Relative Policy Optimization (GRPO) estimates advantages using multiple on-policy outputs per prompt, leading to high computational costs and low data efficiency. To address this, we introduce Replay-Enhanced Policy Optimization (RePO), which leverages diverse replay strategies to retrieve off-policy samples from a replay buffer, allowing policy optimization based on a broader and more diverse set of samples for each prompt. Experiments on five LLMs across seven mathematical reasoning benchmarks demonstrate that RePO achieves absolute average performance gains of $18.4$ and $4.1$ points for Qwen2.5-Math-1.5B and Qwen3-1.7B, respectively, compared to GRPO. Further analysis indicates that RePO increases computational cost by $15\%$ while raising the number of effective optimization steps by $48\%$ for Qwen3-1.7B, with both on-policy and off-policy sample numbers set to $8$. The repository can be accessed at https://github.com/SihengLi99/RePO.
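The abstract describes the mechanism only at a high level: for each prompt, the usual GRPO group of on-policy rollouts is augmented with off-policy samples retrieved from a replay buffer, and advantages are computed over the combined group. Below is a minimal Python sketch of that idea, assuming a per-prompt buffer, simple placeholder replay strategies, and GRPO-style mean/std reward normalization; names such as build_repo_group, policy_sample_fn, and reward_fn are illustrative and not taken from the paper or its repository.

```python
import random
from collections import defaultdict


def group_relative_advantages(rewards, eps=1e-8):
    """GRPO-style advantage: normalize each reward by the group mean and std."""
    mean = sum(rewards) / len(rewards)
    std = (sum((r - mean) ** 2 for r in rewards) / len(rewards)) ** 0.5
    return [(r - mean) / (std + eps) for r in rewards]


class ReplayBuffer:
    """Per-prompt store of past (completion, reward) pairs."""

    def __init__(self, capacity_per_prompt=64):
        self.capacity = capacity_per_prompt
        self.store = defaultdict(list)

    def add(self, prompt, completion, reward):
        bucket = self.store[prompt]
        bucket.append((completion, reward))
        if len(bucket) > self.capacity:
            bucket.pop(0)  # evict the oldest entry

    def sample(self, prompt, k, strategy="random"):
        """Retrieve up to k off-policy samples for this prompt.
        The strategies here ("random", "recent", "reward") are illustrative
        placeholders, not the paper's exact replay strategies."""
        bucket = self.store[prompt]
        if not bucket:
            return []
        if strategy == "recent":
            return bucket[-k:]
        if strategy == "reward":
            return sorted(bucket, key=lambda x: x[1], reverse=True)[:k]
        return random.sample(bucket, min(k, len(bucket)))


def build_repo_group(prompt, policy_sample_fn, reward_fn, buffer,
                     n_on_policy=8, n_off_policy=8, strategy="random"):
    """Form one optimization group for a prompt: fresh on-policy rollouts
    plus replayed off-policy samples, scored with group-relative advantages."""
    # Retrieve off-policy samples first so the group does not simply
    # replay the rollouts generated in this very step.
    off_policy = buffer.sample(prompt, n_off_policy, strategy)

    # Generate and score on-policy rollouts, then store them for future reuse.
    on_policy = []
    for _ in range(n_on_policy):
        completion = policy_sample_fn(prompt)
        reward = reward_fn(prompt, completion)
        on_policy.append((completion, reward))
        buffer.add(prompt, completion, reward)

    group = on_policy + off_policy
    completions, rewards = zip(*group)
    advantages = group_relative_advantages(list(rewards))

    # A full implementation would also apply importance-ratio corrections to
    # the off-policy members in the policy-gradient loss; omitted here.
    return list(zip(completions, advantages))


# Toy usage with stub policy/reward functions.
buf = ReplayBuffer()
sample_fn = lambda p: f"answer-{random.randint(0, 9)}"
reward_fn = lambda p, c: float(c.endswith("7"))
group = build_repo_group("2+5=?", sample_fn, reward_fn, buf)
```

The sample counts default to 8 on-policy and 8 off-policy samples, matching the setting reported in the abstract's cost analysis.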

Siheng Li, Zhanhui Zhou, Wai Lam, Chao Yang, Chaochao Lu

Subjects: Computing Technology, Computer Technology

Siheng Li, Zhanhui Zhou, Wai Lam, Chao Yang, Chaochao Lu. RePO: Replay-Enhanced Policy Optimization [EB/OL]. (2025-06-10) [2025-06-21]. https://arxiv.org/abs/2506.09340.
