National Preprint Platform

Meta-Adaptive Prompt Distillation for Few-Shot Visual Question Answering


Source: arXiv

Abstract

Large Multimodal Models (LMMs) often rely on in-context learning (ICL) to perform new tasks with minimal supervision. However, ICL performance, especially in smaller LMMs, is inconsistent and does not always improve monotonically with increasing examples. We hypothesize that this occurs due to the LMM being overwhelmed by additional information present in the image embeddings, which is not required for the downstream task. To address this, we propose a meta-learning approach that provides an alternative for inducing few-shot capabilities in LMMs, using a fixed set of soft prompts that are distilled from task-relevant image features and can be adapted at test time using a few examples. To facilitate this distillation, we introduce an attention-mapper module that can be easily integrated with the popular LLaVA v1.5 architecture and is jointly learned with soft prompts, enabling task adaptation in LMMs under low-data regimes with just a few gradient steps. Evaluation on the VL-ICL Bench shows that our method consistently outperforms ICL and related prompt-tuning approaches, even under image perturbations, improving task induction and reasoning across visual question answering tasks.
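The abstract describes an attention-mapper that distills task-relevant image features into a fixed set of soft prompts, which are then adapted at test time with a few gradient steps. The paper's exact architecture is not given here, so the following is a minimal sketch of one plausible design: learnable soft-prompt queries cross-attend over image patch embeddings, producing a compact prompt sequence that could be prepended to the language model's input. All module names, dimensions, and hyperparameters are illustrative assumptions, not the authors' implementation.

```python
import torch
import torch.nn as nn

class AttentionMapper(nn.Module):
    """Hypothetical sketch: learnable soft-prompt queries attend over
    image features, distilling them into a small set of prompt vectors."""

    def __init__(self, num_prompts=8, dim=768, num_heads=8):
        super().__init__()
        # Fixed set of learnable soft prompts, used as attention queries
        self.soft_prompts = nn.Parameter(torch.randn(num_prompts, dim) * 0.02)
        self.cross_attn = nn.MultiheadAttention(dim, num_heads, batch_first=True)

    def forward(self, image_feats):
        # image_feats: (batch, num_patches, dim), e.g. vision-encoder patch embeddings
        b = image_feats.size(0)
        queries = self.soft_prompts.unsqueeze(0).expand(b, -1, -1)
        # Cross-attention compresses the image features into num_prompts vectors,
        # discarding information not selected by the learned queries
        distilled, _ = self.cross_attn(queries, image_feats, image_feats)
        return distilled  # (batch, num_prompts, dim), prepended to the LLM input

mapper = AttentionMapper()
feats = torch.randn(2, 576, 768)  # 576 patches as in LLaVA v1.5 (336px / 14px patches)
prompts = mapper(feats)
print(prompts.shape)  # torch.Size([2, 8, 768])
```

At test time, only the soft prompts (and optionally the mapper) would be updated for a few gradient steps on the support examples, which is what allows adaptation in low-data regimes without touching the LMM backbone.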

Akash Gupta, Amos Storkey, Mirella Lapata

Subject areas: Computing Technology, Computer Science and Technology

Akash Gupta, Amos Storkey, Mirella Lapata. Meta-Adaptive Prompt Distillation for Few-Shot Visual Question Answering [EB/OL]. (2025-06-07) [2025-06-30]. https://arxiv.org/abs/2506.06905.
