
Perception in Reflection

Source: arXiv
Abstract

We present a perception in reflection paradigm designed to transcend the limitations of current large vision-language models (LVLMs), which are expected yet often fail to achieve perfect perception initially. Specifically, we propose Reflective Perception (RePer), a dual-model reflection mechanism that systematically alternates between policy and critic models, enabling iterative refinement of visual perception. This framework is powered by Reflective Perceptual Learning (RPL), which reinforces intrinsic reflective capabilities through a methodically constructed visual reflection dataset and reflective unlikelihood training. Comprehensive experimental evaluation demonstrates RePer's quantifiable improvements in image understanding, captioning precision, and hallucination reduction. Notably, RePer achieves strong alignment between model attention patterns and human visual focus, while RPL optimizes fine-grained and free-form preference alignment. These advancements establish perception in reflection as a robust paradigm for future multimodal agents, particularly in tasks requiring complex reasoning and multi-step manipulation.
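For intuition, here is a minimal sketch of the dual-model alternation the abstract describes: a policy model drafts a perception of the image, a critic model returns feedback, and the loop repeats until the critic is satisfied or a round budget is exhausted. All names, signatures, and the toy stubs below are illustrative assumptions, not the paper's actual implementation; the RPL training procedure and reflective unlikelihood loss are not shown.

# Hypothetical sketch of a policy/critic reflection loop (assumed API, not RePer's).
from typing import Callable, Tuple

def reflective_perception(
    image: object,
    policy: Callable[[object, str], str],               # drafts/refines a caption given feedback
    critic: Callable[[object, str], Tuple[str, bool]],  # returns (feedback, is_satisfactory)
    max_rounds: int = 3,
) -> str:
    """Iteratively refine a visual description by alternating policy and critic."""
    feedback = ""                          # no feedback on the first pass
    caption = policy(image, feedback)      # initial perception attempt
    for _ in range(max_rounds):
        feedback, ok = critic(image, caption)
        if ok:                             # critic accepts the current perception
            break
        caption = policy(image, feedback)  # reflect and refine using the critique
    return caption

if __name__ == "__main__":
    # Toy stubs so the sketch runs end to end without any real LVLM.
    drafts = iter(["a dog on grass", "a brown dog playing with a ball on grass"])

    def toy_policy(image, feedback):
        return next(drafts)

    def toy_critic(image, caption):
        ok = "ball" in caption
        return ("mention the ball the dog is playing with", ok)

    print(reflective_perception("image.jpg", toy_policy, toy_critic))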

Yana Wei, Liang Zhao, Kangheng Lin, En Yu, Yuang Peng, Runpei Dong, Jianjian Sun, Haoran Wei, Zheng Ge, Xiangyu Zhang, Vishal M. Patel

Subject areas: Computing Technology, Computer Technology

Yana Wei, Liang Zhao, Kangheng Lin, En Yu, Yuang Peng, Runpei Dong, Jianjian Sun, Haoran Wei, Zheng Ge, Xiangyu Zhang, Vishal M. Patel. Perception in Reflection [EB/OL]. (2025-04-09) [2025-04-27]. https://arxiv.org/abs/2504.07165
