On the Theory and Practice of GRPO: A Trajectory-Corrected Approach with Fast Convergence
Group Relative Policy Optimization (GRPO), recently proposed by DeepSeek, is a critic-free reinforcement learning algorithm for fine-tuning large language models. It replaces the value function in Proximal Policy Optimization (PPO) with group-normalized rewards, while retaining PPO-style token-level importance sampling based on an old policy. We show that the GRPO update rule in fact estimates the policy gradient at the old policy rather than the current one. However, since the old policy is refreshed every few steps, the discrepancy between the two remains small, limiting the impact of this bias in practice. We validate this through an ablation study in which importance sampling is entirely removed and updates are instead performed using the gradient estimated at a fixed old policy across multiple optimization steps. Remarkably, this simplification yields performance comparable to standard GRPO. Motivated by these findings, we propose a new algorithm: Trajectory-level Importance-Corrected GRPO (TIC-GRPO). TIC-GRPO replaces token-level importance ratios with a single trajectory-level probability ratio, yielding an unbiased estimate of the current policy gradient while preserving the critic-free structure. Furthermore, we present the first theoretical convergence analysis for GRPO-style methods, covering both the original GRPO and our proposed variant.
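To make the token-level versus trajectory-level distinction concrete, here is a minimal illustrative sketch; the notation below is ours, not necessarily the paper's. With prompt $q$, sampled response $o_i$ whose $t$-th token is $o_{i,t}$, current policy $\pi_\theta$, and old policy $\pi_{\theta_{\mathrm{old}}}$, the PPO-style token-level importance ratio and the single trajectory-level probability ratio referred to above can be written as

$$
r_{i,t}(\theta) \;=\; \frac{\pi_\theta(o_{i,t} \mid q,\, o_{i,<t})}{\pi_{\theta_{\mathrm{old}}}(o_{i,t} \mid q,\, o_{i,<t})},
\qquad
\rho_i(\theta) \;=\; \frac{\pi_\theta(o_i \mid q)}{\pi_{\theta_{\mathrm{old}}}(o_i \mid q)}
\;=\; \prod_{t=1}^{|o_i|} r_{i,t}(\theta).
$$

In this sketch, standard GRPO weights each token's group-normalized advantage by $r_{i,t}(\theta)$, whereas the trajectory-corrected variant described in the abstract weights the whole sampled trajectory by the single scalar $\rho_i(\theta)$.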
Lei Pang, Ruinan Jin
Subject: Computing Technology, Computer Technology
Lei Pang, Ruinan Jin. On the Theory and Practice of GRPO: A Trajectory-Corrected Approach with Fast Convergence [EB/OL]. (2025-08-07) [2025-08-16]. https://arxiv.org/abs/2508.02833.