
AR-GRPO: Training Autoregressive Image Generation Models via Reinforcement Learning

Source: arXiv

Abstract

Inspired by the success of reinforcement learning (RL) in refining large language models (LLMs), we propose AR-GRPO, an approach to integrate online RL training into autoregressive (AR) image generation models. We adapt the Group Relative Policy Optimization (GRPO) algorithm to refine the vanilla autoregressive models' outputs by carefully designed reward functions that evaluate generated images across multiple quality dimensions, including perceptual quality, realism, and semantic fidelity. We conduct comprehensive experiments on both class-conditional (i.e., class-to-image) and text-conditional (i.e., text-to-image) image generation tasks, demonstrating that our RL-enhanced framework significantly improves both the image quality and human preference of generated images compared to the standard AR baselines. Our results show consistent improvements across various evaluation metrics, establishing the viability of RL-based optimization for AR image generation and opening new avenues for controllable and high-quality image synthesis. The source codes and models are available at: https://github.com/Kwai-Klear/AR-GRPO.
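The group-relative advantage estimate that gives GRPO its name can be sketched as follows: for each prompt, a group of images is sampled, each is scored by the reward functions, and rewards are normalized within the group, so no learned value critic is needed. The reward values, group size, and `eps` constant below are illustrative assumptions, not details taken from the paper.

```python
import numpy as np

def group_relative_advantages(rewards, eps=1e-8):
    """Critic-free GRPO-style advantage: normalize each sample's
    reward against the mean and std of its own group,
    A_i = (r_i - mean(r)) / (std(r) + eps)."""
    r = np.asarray(rewards, dtype=np.float64)
    return (r - r.mean()) / (r.std() + eps)

# Hypothetical rewards for a group of 4 images generated from the
# same prompt, e.g. a weighted mix of perceptual-quality, realism,
# and semantic-fidelity scores (the numbers are made up).
rewards = [0.62, 0.75, 0.40, 0.83]
adv = group_relative_advantages(rewards)
```

Samples scoring above the group mean receive positive advantages and their token log-probabilities are reinforced; below-average samples are pushed down, so the policy improves relative to its own current outputs.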

Shihao Yuan, Yahui Liu, Yang Yue, Jingyuan Zhang, Wangmeng Zuo, Qi Wang, Fuzheng Zhang, Guorui Zhou

Subject: Computing Technology; Computer Science

Shihao Yuan, Yahui Liu, Yang Yue, Jingyuan Zhang, Wangmeng Zuo, Qi Wang, Fuzheng Zhang, Guorui Zhou. AR-GRPO: Training Autoregressive Image Generation Models via Reinforcement Learning [EB/OL]. (2025-08-09) [2025-08-24]. https://arxiv.org/abs/2508.06924.
