
GPG: A Simple and Strong Reinforcement Learning Baseline for Model Reasoning

Source: arXiv
English Abstract

Reinforcement Learning (RL) can directly enhance the reasoning capabilities of large language models without extensive reliance on Supervised Fine-Tuning (SFT). In this work, we revisit the traditional Policy Gradient (PG) mechanism and propose a minimalist RL approach termed Group Policy Gradient (GPG). Unlike conventional methods, GPG directly optimizes the original RL objective, thus obviating the need for surrogate loss functions. By eliminating the critic and reference models, avoiding KL divergence constraints, and addressing the advantage and gradient estimation bias, our approach significantly simplifies the training process compared to Group Relative Policy Optimization (GRPO). Our approach achieves superior performance without relying on auxiliary techniques or adjustments. As illustrated in Figure 1, extensive experiments demonstrate that our method not only reduces computational costs but also consistently outperforms GRPO across various unimodal and multimodal tasks. Our code is available at https://github.com/AMAP-ML/GPG.
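
The abstract describes the recipe only at a high level: sample a group of responses per prompt, score them with a reward, and apply a plain policy-gradient update with no critic, no reference model, and no KL penalty. Below is a minimal illustrative sketch of that group policy-gradient idea, not the authors' released implementation; the function name gpg_loss, the mean-centered advantage, and the tensor shapes are assumptions made for illustration, while the exact advantage estimator and bias corrections are given in the paper and the code at https://github.com/AMAP-ML/GPG.

```python
# Minimal, illustrative sketch of a group policy-gradient update (not the
# authors' released code). For each prompt we sample a group of responses,
# form an advantage by centering the group's rewards, and apply a plain
# REINFORCE-style loss -- no critic, no reference model, no KL penalty.
import torch


def gpg_loss(token_log_probs: torch.Tensor, rewards: torch.Tensor) -> torch.Tensor:
    """Policy-gradient loss for one group of G sampled responses.

    token_log_probs: (G, T) per-token log-probabilities under the current policy.
    rewards:         (G,)   scalar reward for each sampled response.
    """
    # Group-relative advantage: the group mean serves as the baseline that a
    # learned critic would otherwise provide. (Exact estimator: see the paper.)
    advantages = rewards - rewards.mean()
    # Maximize advantage-weighted log-likelihood of the sampled responses.
    seq_log_probs = token_log_probs.sum(dim=-1)          # (G,)
    return -(advantages.detach() * seq_log_probs).mean()


if __name__ == "__main__":
    # Toy tensors stand in for model outputs and a task reward.
    G, T = 4, 16                                         # group size, response length
    token_log_probs = torch.randn(G, T, requires_grad=True)
    rewards = torch.tensor([1.0, 0.0, 0.5, 1.0])
    loss = gpg_loss(token_log_probs, rewards)
    loss.backward()                                      # gradients flow to the policy
    print(f"loss = {loss.item():.4f}")
```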

Xiangxiang Chu, Hailang Huang, Xiao Zhang, Fei Wei, Yong Wang

Subjects: Computing Technology; Computer Technology

Xiangxiang Chu, Hailang Huang, Xiao Zhang, Fei Wei, Yong Wang. GPG: A Simple and Strong Reinforcement Learning Baseline for Model Reasoning [EB/OL]. (2025-04-03) [2025-04-30]. https://arxiv.org/abs/2504.02546.
