National Preprint Platform

Top-Down Compression: Revisit Efficient Vision Token Projection for Visual Instruction Tuning

Source: Arxiv
English Abstract

Visual instruction tuning aims to enable large language models to comprehend the visual world, with a pivotal challenge lying in establishing an effective vision-to-language projection. However, existing methods often grapple with the intractable trade-off between accuracy and efficiency. In this paper, we present LLaVA-Meteor, a novel approach designed to break this deadlock, equipped with a Top-Down Compression paradigm that strategically compresses visual tokens without compromising core information. Specifically, we construct a trainable Flash Global Fusion module based on efficient selective state space operators, which aligns the feature space while enabling each token to perceive holistic visual context and instruction preference at low cost. Furthermore, a local-to-single scanning manner is employed to effectively capture local dependencies, thereby enhancing the model's capability in vision modeling. To alleviate computational overhead, we explore a Visual-Native Selection mechanism that independently assesses token significance by both the visual and native experts, followed by aggregation to retain the most critical subset. Extensive experiments show that our approach reduces visual tokens by 75--95% while achieving comparable or superior performance across 12 benchmarks, significantly improving efficiency.
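The Visual-Native Selection step described above can be sketched as follows. This is a minimal illustration, not the paper's implementation: the function name, the sum aggregation of the two experts' scores, and the 25% keep ratio (matching the abstract's 75% token reduction) are all assumptions for the sake of the example.

```python
import numpy as np

def visual_native_selection(tokens, visual_scores, native_scores, keep_ratio=0.25):
    """Hypothetical sketch: two experts score each visual token
    independently; scores are aggregated, and only the top fraction
    of tokens is retained (keep_ratio=0.25 -> 75% reduction)."""
    agg = visual_scores + native_scores        # aggregation by sum (assumption)
    k = max(1, int(len(tokens) * keep_ratio))  # number of tokens to keep
    keep = np.argsort(agg)[-k:]                # indices of the most critical tokens
    keep = np.sort(keep)                       # preserve the original token order
    return tokens[keep]

# Toy usage: 8 tokens of dimension 4, keeping the top 25% (2 tokens).
tokens = np.arange(32, dtype=float).reshape(8, 4)
v = np.array([0.1, 0.9, 0.2, 0.8, 0.3, 0.7, 0.4, 0.6])
n = np.array([0.2, 0.8, 0.1, 0.9, 0.4, 0.6, 0.3, 0.7])
compressed = visual_native_selection(tokens, v, n)
print(compressed.shape)  # (2, 4)
```

Here tokens 1 and 3 score highest under both experts, so only those two rows survive; the real method would learn the scoring functions rather than take them as given.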

Bonan Li, Zicheng Zhang, Songhua Liu, Weihao Yu, Xinchao Wang

Computing Technology, Computer Technology

Bonan Li, Zicheng Zhang, Songhua Liu, Weihao Yu, Xinchao Wang. Top-Down Compression: Revisit Efficient Vision Token Projection for Visual Instruction Tuning [EB/OL]. (2025-05-17) [2025-07-16]. https://arxiv.org/abs/2505.11945.
