
VRPRM: Process Reward Modeling via Visual Reasoning


Source: arXiv
Abstract

Process Reward Models (PRMs) are widely used in the post-training of Large Language Models (LLMs) because they can perform fine-grained evaluation of the reasoning steps in generated content. However, most PRMs lack long-horizon reasoning and deep-thinking capabilities. Although a few works have tried to introduce Chain-of-Thought (CoT) capability into PRMs, the annotation cost of CoT-PRM data is too high for it to play a stable role across diverse tasks. To address these challenges, we propose VRPRM, a process reward model via visual reasoning, and design an efficient two-stage training strategy. Experimental results show that, using only 3.6K CoT-PRM SFT examples and 50K non-CoT PRM RL training examples, VRPRM surpasses a non-thinking PRM trained on a total of 400K examples and achieves a relative performance improvement of up to 118% over the base model in the Best-of-N (BoN) experiment. This result confirms that the proposed combined training strategy can achieve higher-quality reasoning capabilities at a lower data annotation cost, thus providing a new, more data-efficient paradigm for PRM training.
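The BoN evaluation mentioned above relies on a PRM scoring several candidate responses and keeping the highest-scoring one. The sketch below illustrates that selection loop only; the step-scoring function, aggregation rule, and all names are illustrative assumptions and do not reflect the paper's actual model or implementation.

```python
# Minimal sketch of Best-of-N (BoN) selection with a process reward model.
# score_step() is a toy stand-in for a trained PRM; the heuristic is assumed,
# not taken from the paper.

from typing import List


def score_step(step: str) -> float:
    """Toy per-step score in place of a real PRM forward pass."""
    return min(len(step) / 100.0, 1.0) + (0.5 if "therefore" in step.lower() else 0.0)


def prm_score(response_steps: List[str]) -> float:
    """Aggregate step scores into a response-level score (mean over steps)."""
    if not response_steps:
        return 0.0
    return sum(score_step(s) for s in response_steps) / len(response_steps)


def best_of_n(candidates: List[List[str]]) -> int:
    """Return the index of the candidate response with the highest PRM score."""
    scores = [prm_score(steps) for steps in candidates]
    return max(range(len(scores)), key=scores.__getitem__)


if __name__ == "__main__":
    # Each candidate is a list of reasoning steps produced by the policy model.
    candidates = [
        ["Compute 2 + 2 step by step.", "The sum is 4, therefore the answer is 4."],
        ["Guess the answer.", "It is probably 5."],
    ]
    print("Selected candidate:", best_of_n(candidates))
```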

Xinquan Chen, Bangwei Liu, Xuhong Wang

Subject: Computing technology; computer technology

Xinquan Chen, Bangwei Liu, Xuhong Wang. VRPRM: Process Reward Modeling via Visual Reasoning [EB/OL]. (2025-08-05) [2025-08-16]. https://arxiv.org/abs/2508.03556.
