Token Compression Meets Compact Vision Transformers: A Survey and Comparative Evaluation for Edge AI
Token compression techniques have recently emerged as powerful tools for accelerating Vision Transformer (ViT) inference in computer vision. Because self-attention scales quadratically with the token sequence length, these methods remove less informative tokens before the attention layers to improve inference throughput. While numerous studies have explored various accuracy-efficiency trade-offs on large-scale ViTs, two critical gaps remain. First, there is no unified survey that systematically categorizes and compares token compression approaches by their core strategies (e.g., pruning, merging, or hybrid) and deployment settings (e.g., fine-tuning vs. plug-in). Second, most benchmarks are limited to standard ViT models (e.g., ViT-B, ViT-L), leaving open the question of whether such methods remain effective when applied to structurally compressed transformers, which are increasingly deployed on resource-constrained edge devices. To address these gaps, we present the first systematic taxonomy and comparative study of token compression methods, and we evaluate representative techniques on both standard and compact ViT architectures. Our experiments reveal that while token compression methods are effective for general-purpose ViTs, they often underperform when directly applied to compact designs. These findings not only provide practical insights but also pave the way for future research on adapting token optimization techniques to compact transformer-based networks for edge AI and AI agent applications.
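To make the pruning strategy mentioned above concrete, here is a minimal, hypothetical sketch of score-based token pruning: before an attention layer, each patch token is ranked by a saliency score (e.g., CLS-to-patch attention) and only the top fraction is kept. The function name, score source, and keep ratio are illustrative assumptions, not the method of any specific paper surveyed.

```python
import numpy as np

def prune_tokens(tokens, scores, keep_ratio=0.5):
    """Keep the top-k highest-scoring tokens (hypothetical saliency-based pruning).

    tokens: (N, D) array of patch-token embeddings (CLS token excluded).
    scores: (N,) saliency per token, e.g. attention weight from the CLS token.
    keep_ratio: fraction of tokens retained for the next layer.
    """
    k = max(1, int(round(keep_ratio * tokens.shape[0])))
    keep = np.argsort(scores)[-k:]   # indices of the k most salient tokens
    keep = np.sort(keep)             # preserve the original spatial order
    return tokens[keep], keep

# Toy example: 8 tokens of dimension 4, half are kept.
rng = np.random.default_rng(0)
toks = rng.standard_normal((8, 4))
scores = rng.random(8)
kept, idx = prune_tokens(toks, scores, keep_ratio=0.5)
print(kept.shape)  # (4, 4)
```

Merging-based methods differ in that dropped tokens are averaged into their nearest kept neighbors rather than discarded, which preserves more information at the same token budget.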
Phat Nguyen, Ngai-Man Cheung
Computing Technology; Computer Science and Technology
Phat Nguyen, Ngai-Man Cheung. Token Compression Meets Compact Vision Transformers: A Survey and Comparative Evaluation for Edge AI [EB/OL]. (2025-07-13) [2025-07-23]. https://arxiv.org/abs/2507.09702.