Perception-R1: Pioneering Perception Policy with Reinforcement Learning
Inspired by the success of DeepSeek-R1, we explore the potential of rule-based reinforcement learning (RL) in MLLM post-training for perception policy learning. While promising, our initial experiments reveal that incorporating a thinking process through RL does not consistently lead to performance gains across all visual perception tasks. This leads us to delve into the essential role of RL in the context of visual perception. In this work, we return to the fundamentals and explore the effects of RL on different perception tasks. We observe that perceptual complexity is a major factor in determining the effectiveness of RL. We also observe that reward design plays a crucial role in further approaching the upper limit of model perception. To leverage these findings, we propose Perception-R1, a scalable RL framework using GRPO during MLLM post-training. With a standard Qwen2.5-VL-3B-Instruct, Perception-R1 achieves +4.2% on RefCOCO+, +17.9% on PixMo-Count, +4.2% on PageOCR, and notably, 31.9% AP on COCO2017 val for the first time, establishing a strong baseline for perception policy learning.
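To make the GRPO step concrete, below is a minimal sketch (not the authors' code) of the group-relative advantage computation that GRPO uses, paired with a rule-based perception reward. The IoU-based reward and all helper names are illustrative assumptions for a grounding-style task, not details taken from the paper.

```python
# Illustrative sketch of GRPO's group-relative advantages with a rule-based
# reward. The IoU reward and helper names are assumptions, not the paper's code.
from typing import List, Tuple


def iou(box_a: Tuple[float, float, float, float],
        box_b: Tuple[float, float, float, float]) -> float:
    """Intersection-over-union of two (x1, y1, x2, y2) boxes."""
    x1, y1 = max(box_a[0], box_b[0]), max(box_a[1], box_b[1])
    x2, y2 = min(box_a[2], box_b[2]), min(box_a[3], box_b[3])
    inter = max(0.0, x2 - x1) * max(0.0, y2 - y1)
    area_a = (box_a[2] - box_a[0]) * (box_a[3] - box_a[1])
    area_b = (box_b[2] - box_b[0]) * (box_b[3] - box_b[1])
    union = area_a + area_b - inter
    return inter / union if union > 0 else 0.0


def group_relative_advantages(rewards: List[float], eps: float = 1e-6) -> List[float]:
    """GRPO-style advantages: z-score each reward within its rollout group."""
    mean = sum(rewards) / len(rewards)
    std = (sum((r - mean) ** 2 for r in rewards) / len(rewards)) ** 0.5
    return [(r - mean) / (std + eps) for r in rewards]


if __name__ == "__main__":
    # One grounding query, G = 4 sampled rollouts, each predicting one box.
    gt_box = (10.0, 10.0, 60.0, 60.0)
    predictions = [(12, 11, 58, 59), (0, 0, 30, 30), (10, 10, 60, 60), (40, 40, 90, 90)]
    rewards = [iou(p, gt_box) for p in predictions]    # rule-based reward per rollout
    advantages = group_relative_advantages(rewards)    # group-normalized advantages
    print(rewards, advantages)
```

Rollouts that score above their group's mean reward receive positive advantages and are reinforced, which is how a rule-based reward (here, IoU) can shape a perception policy without a learned reward model.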
En Yu, Kangheng Lin, Liang Zhao, Jisheng Yin, Yana Wei, Yuang Peng, Haoran Wei, Jianjian Sun, Chunrui Han, Zheng Ge, Xiangyu Zhang, Daxin Jiang, Jingyu Wang, Wenbing Tao
Computing Technology, Computer Technology
En Yu, Kangheng Lin, Liang Zhao, Jisheng Yin, Yana Wei, Yuang Peng, Haoran Wei, Jianjian Sun, Chunrui Han, Zheng Ge, Xiangyu Zhang, Daxin Jiang, Jingyu Wang, Wenbing Tao. Perception-R1: Pioneering Perception Policy with Reinforcement Learning [EB/OL]. (2025-04-10) [2025-04-26]. https://arxiv.org/abs/2504.07954