Mimicking or Reasoning: Rethinking Multi-Modal In-Context Learning in Vision-Language Models
Vision-language models (VLMs) are widely assumed to exhibit in-context learning (ICL), a property similar to that of their language-only counterparts. While recent work suggests VLMs can perform multimodal ICL (MM-ICL), studies show they often rely on shallow heuristics, such as copying or majority voting, rather than true task understanding. We revisit this assumption by evaluating VLMs under distribution shifts, where support examples come from a dataset different from the query. Surprisingly, performance often degrades with more demonstrations, and models tend to copy answers rather than learn from them. To investigate further, we propose a new MM-ICL with Reasoning pipeline that augments each demonstration with a generated rationale alongside the answer. We conduct extensive experiments on both perception- and reasoning-required datasets with open-source VLMs ranging from 3B to 72B and proprietary models such as Gemini 2.0, running controlled studies that vary shot count, retrieval method, rationale quality, and distribution. Our results show limited performance sensitivity across these factors, suggesting that current VLMs do not effectively utilize demonstration-level information as intended in MM-ICL.
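To make the rationale-augmented prompting described above concrete, the sketch below shows one plausible way to interleave demonstrations, each carrying an image reference, question, generated rationale, and answer, before a query. This is a minimal illustration under assumed interfaces (the `Demo` dataclass, `build_prompt` helper, and the image-placeholder convention are hypothetical), not the authors' implementation.

```python
from dataclasses import dataclass
from typing import List


@dataclass
class Demo:
    """One support example: image reference, question, generated rationale, answer."""
    image_path: str
    question: str
    rationale: str  # generated explanation paired with the answer (the MM-ICL with Reasoning variant)
    answer: str


def build_prompt(demos: List[Demo], query_question: str) -> str:
    """Interleave rationale-augmented demonstrations before the query.

    Plain MM-ICL would include only question/answer pairs; this variant adds
    an explicit rationale string to each demonstration.
    """
    parts = []
    for i, d in enumerate(demos, start=1):
        parts.append(
            f"Example {i}:\n"
            f"<image:{d.image_path}>\n"
            f"Question: {d.question}\n"
            f"Rationale: {d.rationale}\n"
            f"Answer: {d.answer}\n"
        )
    parts.append(f"Query:\n<image>\nQuestion: {query_question}\nAnswer:")
    return "\n".join(parts)


if __name__ == "__main__":
    demos = [
        Demo("demo1.jpg", "How many dogs are in the image?",
             "Two dogs sit on the left and one stands on the right.", "3"),
    ]
    # The resulting string would be sent to a VLM together with the referenced images.
    print(build_prompt(demos, "What color is the traffic light?"))
```

Varying the number of `Demo` entries (shot count), how they are retrieved, and the quality of each `rationale` field corresponds to the controlled factors studied in the paper.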
Chengyue Huang, Yuchen Zhu, Sichen Zhu, Jingyun Xiao, Moises Andrade, Shivang Chopra, Zsolt Kira
Subjects: Computing Technology, Computer Technology
Chengyue Huang, Yuchen Zhu, Sichen Zhu, Jingyun Xiao, Moises Andrade, Shivang Chopra, Zsolt Kira. Mimicking or Reasoning: Rethinking Multi-Modal In-Context Learning in Vision-Language Models [EB/OL]. (2025-06-09) [2025-07-16]. https://arxiv.org/abs/2506.07936.