Look-Back: Implicit Visual Re-focusing in MLLM Reasoning
Multimodal Large Language Models (MLLMs) have achieved remarkable progress in multimodal reasoning. However, they often rely excessively on textual information during the later stages of inference, neglecting the crucial integration of visual input. Current methods typically address this by explicitly injecting visual information to guide the reasoning process. In this work, through an analysis of MLLM attention patterns, we make an intriguing observation: with appropriate guidance, MLLMs can spontaneously re-focus their attention on visual inputs during the later stages of reasoning, even without explicit visual information injection. This spontaneous shift in focus suggests that MLLMs are intrinsically capable of performing visual fusion reasoning. Building on this insight, we introduce Look-Back, an implicit approach designed to guide MLLMs to "look back" at visual information in a self-directed manner during reasoning. Look-Back empowers the model to autonomously determine when, where, and how to re-focus on visual inputs, eliminating the need for explicit model-structure constraints or additional input. We demonstrate that Look-Back significantly enhances the model's reasoning and perception capabilities, as evidenced by extensive empirical evaluations on multiple multimodal benchmarks.
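The abstract refers to an analysis of MLLM attention patterns showing that attention drifts away from visual tokens late in reasoning. The snippet below is a minimal sketch, not the authors' code, of how such an analysis could be carried out: it measures, per decoding step, the fraction of attention mass the newly generated token places on image tokens. The tensor layout follows Hugging Face models called with `output_attentions=True`; the `image_token_mask` argument and the averaging over layers and heads are illustrative assumptions.

```python
# Hypothetical sketch: per-step attention allocated to visual tokens.
import torch

def visual_attention_ratio(attentions, image_token_mask):
    """
    attentions: list over decoding steps; each element is a tuple over layers of
        tensors shaped (batch, num_heads, query_len, key_len), as returned by a
        Hugging Face model run with output_attentions=True during generation.
    image_token_mask: bool tensor of shape (prompt_len,) marking image tokens
        in the prompt (assumed to be known from the processor).
    Returns one value per step: the fraction of attention, averaged over layers
    and heads, that the newly generated token pays to image tokens.
    """
    ratios = []
    for step_attn in attentions:
        per_layer = []
        for layer_attn in step_attn:
            # Attention distribution of the last (newly generated) query position.
            last_q = layer_attn[0, :, -1, :]              # (num_heads, key_len)
            # Extend the prompt-level mask to the current key length.
            mask = torch.zeros(last_q.shape[-1], dtype=torch.bool)
            mask[: image_token_mask.shape[0]] = image_token_mask
            visual_mass = last_q[:, mask].sum(dim=-1)     # (num_heads,)
            per_layer.append((visual_mass / last_q.sum(dim=-1)).mean())
        ratios.append(torch.stack(per_layer).mean().item())
    return ratios
```

Plotting these ratios against the decoding step would show whether attention to the image decays over the course of a reasoning chain, which is the kind of evidence the paper's observation rests on.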
Shuo Yang, Yuwei Niu, Yuyang Liu, Yang Ye, Bin Lin, Li Yuan
Computing Technology, Computer Technology
Shuo Yang, Yuwei Niu, Yuyang Liu, Yang Ye, Bin Lin, Li Yuan. Look-Back: Implicit Visual Re-focusing in MLLM Reasoning [EB/OL]. (2025-07-02) [2025-07-19]. https://arxiv.org/abs/2507.03019.