TACO: Enhancing Multimodal In-context Learning via Task Mapping-Guided Sequence Configuration
Multimodal in-context learning (ICL) has emerged as a key mechanism for harnessing the capabilities of large vision-language models (LVLMs). However, its effectiveness remains highly sensitive to the quality of input in-context sequences, particularly for tasks involving complex reasoning or open-ended generation. A central obstacle is our limited understanding of how LVLMs actually exploit these sequences during inference. To bridge this gap, we systematically interpret multimodal ICL through the lens of task mapping, which reveals how local and global relationships within and among demonstrations guide model reasoning. Building on this insight, we present TACO, a lightweight transformer-based model equipped with task-aware attention that dynamically configures in-context sequences. By injecting task-mapping signals into the autoregressive decoding process, TACO creates a bidirectional synergy between sequence construction and task reasoning. Experiments on five LVLMs and nine datasets demonstrate that TACO consistently surpasses baselines across diverse ICL tasks. These results position task mapping as a valuable perspective for interpreting and improving multimodal ICL.
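The abstract describes TACO only at a high level. The sketch below is a minimal, hypothetical illustration of the general idea: a lightweight transformer with an added "task token" attends jointly over the query and candidate demonstrations, scores the candidates, and builds an ordered in-context sequence step by step. All names (TacoSketch, configure), dimensions, and architectural details are assumptions for illustration, not the authors' implementation.

```python
import torch
import torch.nn as nn


class TacoSketch(nn.Module):
    """Illustrative sketch of a task-aware in-context sequence configurator.

    A small transformer encoder attends over a learned task token, the query
    embedding, any already-selected demonstrations, and the remaining
    candidates, then scores each candidate. Selection proceeds greedily,
    conditioning each step on the demonstrations chosen so far.
    """

    def __init__(self, dim: int = 512, heads: int = 8, layers: int = 2):
        super().__init__()
        enc_layer = nn.TransformerEncoderLayer(d_model=dim, nhead=heads, batch_first=True)
        self.encoder = nn.TransformerEncoder(enc_layer, num_layers=layers)
        self.task_token = nn.Parameter(torch.randn(1, 1, dim))  # global task-mapping signal (assumed)
        self.scorer = nn.Linear(dim, 1)

    def forward(self, query: torch.Tensor, candidates: torch.Tensor,
                chosen: torch.Tensor | None = None) -> torch.Tensor:
        """query: (B, dim); candidates: (B, N, dim) -> per-candidate scores (B, N)."""
        b = query.size(0)
        parts = [self.task_token.expand(b, -1, -1), query.unsqueeze(1)]
        if chosen is not None:
            parts.append(chosen)                 # previously selected demos as context
        parts.append(candidates)
        encoded = self.encoder(torch.cat(parts, dim=1))
        n = candidates.size(1)
        return self.scorer(encoded[:, -n:]).squeeze(-1)

    @torch.no_grad()
    def configure(self, query: torch.Tensor, candidates: torch.Tensor, k: int = 4) -> list[int]:
        """Greedy step-by-step selection of k demonstrations for one query."""
        chosen_idx: list[int] = []
        mask = torch.zeros(candidates.size(1), dtype=torch.bool)
        for _ in range(k):
            ctx = candidates[:, chosen_idx] if chosen_idx else None
            scores = self.forward(query, candidates, ctx)[0]
            scores = scores.masked_fill(mask, float("-inf"))  # exclude already-selected demos
            idx = int(scores.argmax())
            chosen_idx.append(idx)
            mask[idx] = True
        return chosen_idx                         # ordered indices of the in-context sequence


if __name__ == "__main__":
    model = TacoSketch()
    q = torch.randn(1, 512)                       # pooled query embedding
    demos = torch.randn(1, 16, 512)               # 16 candidate demonstration embeddings
    print(model.configure(q, demos, k=4))
```

In the paper, the configuration signal is injected into the LVLM's autoregressive decoding rather than applied as a standalone selector; this sketch only illustrates the scoring-and-ordering view under the stated assumptions.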
Yanshu Li, Tian Yun, Jianjiang Yang, Pinyuan Feng, Jinfa Huang, Ruixiang Tang
Computing Technology, Computer Technology
Yanshu Li, Tian Yun, Jianjiang Yang, Pinyuan Feng, Jinfa Huang, Ruixiang Tang. TACO: Enhancing Multimodal In-context Learning via Task Mapping-Guided Sequence Configuration [EB/OL]. (2025-05-21) [2025-07-23]. https://arxiv.org/abs/2505.17098.