National Preprint Platform

Concepts or Skills? Rethinking Instruction Selection for Multi-modal Models

Source: arXiv
Abstract

Vision-language instruction tuning serves two main purposes: learning visual concepts and learning visual skills. In this paper, we find that vision-language benchmarks fall into a dichotomy: each predominantly benefits from training on instructions with either similar skills or similar visual concepts. Motivated by this discovery, we design a simple targeted training data selection method to optimize performance on a given benchmark. We first extract the concepts/skills from the benchmark, determine whether the benchmark predominantly benefits from similar concepts or from similar skills, and finally select the instructions with the best-matching concepts/skills. Experiments on 10+ benchmarks validate the effectiveness of our targeted data selection method, showing +0.9% over the best existing baseline averaged over all benchmarks and +1.5% on the skill-focused subset. Our findings underscore the importance of recognizing the inherent trade-off within instruction selection, which requires balancing the acquisition of conceptual knowledge against visual skills.
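The selection procedure described above can be sketched in code. This is a minimal illustration, not the paper's implementation: the function name, the tag-based data representation, and the overlap-counting score are all hypothetical stand-ins for whatever concept/skill extraction and matching the paper actually uses.

```python
# Hedged sketch of targeted instruction selection: given the tags
# (concepts or skills) extracted from a benchmark, rank a training
# pool by tag overlap and keep the top-k instructions.
# All names and the scoring rule are illustrative assumptions.

from collections import Counter


def select_instructions(benchmark_tags, pool, k, match_on="skills"):
    """Rank training examples by overlap between their tags and the
    benchmark's tags, then keep the k best matches.

    match_on selects which axis to match ("skills" or "concepts"),
    reflecting the dichotomy the abstract describes.
    """
    target = Counter(benchmark_tags)  # missing tags count as 0

    def overlap(example):
        return sum(target[tag] for tag in example[match_on])

    return sorted(pool, key=overlap, reverse=True)[:k]


# Toy usage: a benchmark that mostly exercises OCR and counting skills.
pool = [
    {"id": 1, "skills": ["OCR"], "concepts": ["street sign"]},
    {"id": 2, "skills": ["counting", "OCR"], "concepts": ["fruit"]},
    {"id": 3, "skills": ["captioning"], "concepts": ["dog"]},
]
chosen = select_instructions(["OCR", "counting"], pool, k=2)
```

Under this toy scoring, example 2 matches both target skills and ranks first, example 1 matches one, and example 3 is dropped.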

Andrew Bai, Justin Cui, Ruochen Wang, Cho-Jui Hsieh

Subject: Computing Technology; Computer Technology

Andrew Bai, Justin Cui, Ruochen Wang, Cho-Jui Hsieh. Concepts or Skills? Rethinking Instruction Selection for Multi-modal Models [EB/OL]. (2025-08-14) [2025-08-24]. https://arxiv.org/abs/2508.10339.
