AutoV: Learning to Retrieve Visual Prompt for Large Vision-Language Models
Inspired by text prompts in large language models (LLMs), visual prompts have been explored to enhance the reasoning capabilities of large vision-language models (LVLMs). Current methods design heuristic visual prompts, such as overlaying a text-query-guided attention heatmap on the original input image. However, designing effective prompts manually is challenging and time-consuming, and it often fails to exploit the benefits of different visual prompts, leading to sub-optimal performance. To this end, we propose \textbf{AutoV}, which learns to automatically select the optimal visual prompt from various candidates based on the given textual query and input image. To train AutoV, we develop an automatic data collection and labeling pipeline that evaluates various visual prompts with a pre-trained LVLM: we feed a set of visual prompts into the LVLM and rank them according to the prediction losses the model produces. Using this ranking as a supervision signal, we train AutoV to choose the optimal visual prompt from the candidates for a given LVLM. Experimental results indicate that AutoV enhances the performance of various LVLMs across multiple popular image understanding tasks. For instance, LLaVA-OV with AutoV achieves a $\textbf{1.7}\%$ accuracy gain on LLaVA$^{\text{Wild}}$, and AutoV boosts Qwen2.5-VL by $\textbf{1.9}\%$ on MMMU, highlighting its potential as an optimal visual prompting method for LVLMs.
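As a minimal sketch of the ranking step described above (assuming a user-supplied `loss_fn` that queries a pre-trained LVLM and returns its prediction loss; this helper and its signature are illustrative, not the paper's actual pipeline), candidate visual prompts can be ordered by ascending loss, with the resulting ranking serving as the supervision signal for the selector:

```python
import torch


def rank_visual_prompts(loss_fn, prompted_images, query, answer):
    """Rank candidate visual prompts by a pre-trained LVLM's prediction loss.

    loss_fn        : callable(image, query, answer) -> scalar loss (lower = better);
                     hypothetical wrapper around the frozen LVLM's forward pass.
    prompted_images: list of images, each overlaid with a different visual prompt.
    Returns the candidate indices sorted from best (lowest loss) to worst,
    plus the raw losses, which can be converted into ranking labels.
    """
    losses = torch.tensor([loss_fn(img, query, answer) for img in prompted_images])
    ranking = torch.argsort(losses)  # ascending order: best prompt first
    return ranking.tolist(), losses.tolist()
```

A selector trained on such rankings can then, at inference time, pick the top-ranked visual prompt for each (image, query) pair without querying the LVLM over all candidates.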
Yuan Zhang, Chun-Kai Fan, Tao Huang, Ming Lu, Sicheng Yu, Junwen Pan, Kuan Cheng, Qi She, Shanghang Zhang
Computing technology; computer science and technology
Yuan Zhang, Chun-Kai Fan, Tao Huang, Ming Lu, Sicheng Yu, Junwen Pan, Kuan Cheng, Qi She, Shanghang Zhang. AutoV: Learning to Retrieve Visual Prompt for Large Vision-Language Models [EB/OL]. (2025-06-19) [2025-08-02]. https://arxiv.org/abs/2506.16112.