
BlindSight: Harnessing Sparsity for Efficient VLMs

Source: arXiv

Abstract

Large vision-language models (VLMs) enable the joint processing of text and images. However, the inclusion of vision data significantly expands the prompt length. Combined with the quadratic complexity of the attention computation, this results in a longer prefill duration. One approach to mitigating this bottleneck is to leverage the inherent sparsity in the attention computation. In our analysis of attention patterns in VLMs, we observe that a substantial portion of layers exhibit minimal cross-image attention, except through attention-sink tokens per image. These sparse attention patterns fall into distinct categories: sink-only, document mask, and a hybrid document-sink mask. Based on this, we propose BlindSight: a training-free approach to optimize VLM inference using an input-template-aware attention sparsity mask. We utilize samples from a dataset to derive a prompt-agnostic sparsity categorization for every attention head. We evaluate the proposed technique on VLMs such as Qwen2-VL, Qwen2.5-VL, and Gemma-3. BlindSight yields a 32%-41% reduction in FLOPs on average, with an accuracy change of -2% to +2% relative to the original model on most evaluated multi-image understanding benchmarks.
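The three mask categories can be pictured as restrictions layered on top of the ordinary causal mask. Below is a minimal illustrative sketch (not the paper's implementation) of how such masks might be constructed with NumPy, assuming each image segment's first token acts as its attention sink; the function name, the per-image span representation, and the exact within-image treatment of the sink-only case are assumptions made for illustration.

```python
import numpy as np

def build_category_mask(n_tokens, image_spans, category):
    """Return a boolean attention mask (True = may attend) for one of the three
    hypothesised sparsity categories, layered on top of causal masking.

    image_spans : list of (start, end) half-open token ranges, one per image.
    category    : "sink_only", "document", or "hybrid".
    """
    # Standard lower-triangular causal mask as the baseline.
    mask = np.tril(np.ones((n_tokens, n_tokens), dtype=bool))

    for i, (qs, qe) in enumerate(image_spans):      # query tokens of image i
        for j, (ks, ke) in enumerate(image_spans):  # key tokens of image j
            if i == j:
                # Within-image attention: kept for "document" and "hybrid";
                # for "sink_only", restrict to the image's own sink plus self.
                if category == "sink_only":
                    keep_sink = mask[qs:qe, ks].copy()
                    mask[qs:qe, ks:ke] = False
                    mask[qs:qe, ks] = keep_sink
                    np.fill_diagonal(mask[qs:qe, qs:qe], True)
            else:
                # Cross-image attention: dropped in all categories, except the
                # other image's sink token for "sink_only" and "hybrid".
                keep_sink = mask[qs:qe, ks].copy()
                mask[qs:qe, ks:ke] = False
                if category in ("sink_only", "hybrid"):
                    mask[qs:qe, ks] = keep_sink
    return mask

# Example: tokens 0-1 are text, 2-5 image A, 6-9 image B, 10-11 trailing text.
mask = build_category_mask(12, [(2, 6), (6, 10)], "hybrid")
```

In an actual deployment, a mask like this would be applied per attention head according to its offline-derived category, e.g. as a block-sparse pattern in the attention kernel, which is where the prefill FLOP savings would come from.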

Tharun Adithya Srikrishnan, Deval Shah, Steven K. Reinhardt

Subjects: Computing Technology, Computer Technology

Tharun Adithya Srikrishnan, Deval Shah, Steven K. Reinhardt. BlindSight: Harnessing Sparsity for Efficient VLMs [EB/OL]. (2025-07-11) [2025-08-02]. https://arxiv.org/abs/2507.09071.