
TopV: Compatible Token Pruning with Inference Time Optimization for Fast and Low-Memory Multimodal Vision Language Model

Source: arXiv

Abstract

Vision-Language Models (VLMs) demand substantial computational resources during inference, largely due to the extensive visual input tokens for representing visual information. Previous studies have noted that visual tokens tend to receive less attention than text tokens, suggesting their lower importance during inference and potential for pruning. However, their methods encounter several challenges: reliance on greedy heuristic criteria for token importance and incompatibility with FlashAttention and KV cache. To address these issues, we introduce TopV, a compatible TOken Pruning with inference Time Optimization for fast and low-memory VLM, achieving efficient pruning without additional training or fine-tuning. Instead of relying on attention scores, we formulate token pruning as an optimization problem, accurately identifying important visual tokens while remaining compatible with FlashAttention. Additionally, since we only perform this pruning once during the prefilling stage, it effectively reduces KV cache size. Our optimization framework incorporates a visual-aware cost function considering factors such as Feature Similarity, Relative Spatial Distance, and Absolute Central Distance, to measure the importance of each source visual token, enabling effective pruning of low-importance tokens. Extensive experiments demonstrate that our method outperforms previous token pruning methods, validating the effectiveness and efficiency of our approach.
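To make the abstract's pipeline concrete, below is a minimal sketch of one-shot, prefill-time visual token pruning driven by a visual-aware score combining the three named factors (Feature Similarity, Relative Spatial Distance, Absolute Central Distance). The weightings, exact functional forms, and function names here are illustrative assumptions, not the paper's actual formulation; the paper solves an optimization problem, whereas this sketch simply ranks tokens by a heuristic combination of the same factors.

```python
import numpy as np

def token_importance(features, positions, alpha=1.0, beta=1.0, gamma=1.0):
    """Hypothetical visual-aware importance score per source visual token.
    Combines the three factors named in the abstract; alpha/beta/gamma are
    illustrative weights, not values from the paper."""
    n = features.shape[0]
    # Feature Similarity: mean cosine similarity to the other visual tokens;
    # highly redundant tokens (similar to many others) score lower.
    normed = features / (np.linalg.norm(features, axis=1, keepdims=True) + 1e-8)
    sim = normed @ normed.T
    mean_sim = (sim.sum(axis=1) - 1.0) / max(n - 1, 1)  # exclude self-similarity
    # Relative Spatial Distance: mean grid distance to the other tokens.
    pair_dist = np.linalg.norm(positions[:, None, :] - positions[None, :, :], axis=-1)
    rel_dist = pair_dist.sum(axis=1) / max(n - 1, 1)
    rel_dist = rel_dist / (rel_dist.max() + 1e-8)
    # Absolute Central Distance: distance to the center of the token grid,
    # assuming central tokens are more likely to be salient.
    center = positions.mean(axis=0)
    cen_dist = np.linalg.norm(positions - center, axis=1)
    cen_dist = cen_dist / (cen_dist.max() + 1e-8)
    # Higher score = more important (less redundant, more central).
    return -alpha * mean_sim - beta * rel_dist - gamma * cen_dist

def prune_visual_tokens(features, positions, keep_ratio=0.5):
    """One-shot pruning at the prefilling stage: keep only the top-scoring
    tokens, so pruned tokens never enter the KV cache."""
    scores = token_importance(features, positions)
    k = max(1, int(features.shape[0] * keep_ratio))
    keep = np.sort(np.argsort(scores)[-k:])  # preserve original token order
    return keep, features[keep], positions[keep]

# Usage: 16 visual tokens on a 4x4 grid, keep 25% of them.
rng = np.random.default_rng(0)
feats = rng.normal(size=(16, 8))
pos = np.stack(np.meshgrid(np.arange(4), np.arange(4), indexing="ij"),
               axis=-1).reshape(-1, 2).astype(float)
keep, kept_feats, kept_pos = prune_visual_tokens(feats, pos, keep_ratio=0.25)
```

Because the selection happens once before decoding, downstream attention (including FlashAttention kernels) simply sees a shorter sequence, which is what makes this style of pruning cache- and kernel-compatible.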

Cheng Yang, Yang Sui, Jinqi Xiao, Lingyi Huang, Yu Gong, Chendi Li, Jinghua Yan, Yu Bai, Ponnuswamy Sadayappan, Xia Hu, Bo Yuan

Subjects: Computing Technology; Computer Technology

Cheng Yang, Yang Sui, Jinqi Xiao, Lingyi Huang, Yu Gong, Chendi Li, Jinghua Yan, Yu Bai, Ponnuswamy Sadayappan, Xia Hu, Bo Yuan. TopV: Compatible Token Pruning with Inference Time Optimization for Fast and Low-Memory Multimodal Vision Language Model [EB/OL]. (2025-03-23) [2025-04-24]. https://arxiv.org/abs/2503.18278.
