
TokLIP: Marry Visual Tokens to CLIP for Multimodal Comprehension and Generation

Source: arXiv
Abstract

Pioneering token-based works such as Chameleon and Emu3 have established a foundation for multimodal unification but face challenges of high training computational overhead and limited comprehension performance due to a lack of high-level semantics. In this paper, we introduce TokLIP, a visual tokenizer that enhances comprehension by semanticizing vector-quantized (VQ) tokens and incorporating CLIP-level semantics while enabling end-to-end multimodal autoregressive training with standard VQ tokens. TokLIP integrates a low-level discrete VQ tokenizer with a ViT-based token encoder to capture high-level continuous semantics. Unlike previous approaches (e.g., VILA-U) that discretize high-level features, TokLIP disentangles training objectives for comprehension and generation, allowing the direct application of advanced VQ tokenizers without the need for tailored quantization operations. Our empirical results demonstrate that TokLIP achieves exceptional data efficiency, empowering visual tokens with high-level semantic understanding while enhancing low-level generative capacity, making it well-suited for autoregressive Transformers in both comprehension and generation tasks. The code and models are available at https://github.com/TencentARC/TokLIP.
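To make the architecture described above concrete, below is a minimal, hypothetical sketch (not the released TokLIP code) of the two pieces the abstract names: a low-level discrete VQ tokenizer and a ViT-style token encoder that lifts the VQ token embeddings to CLIP-level semantics. All module names, dimensions, and the toy codebook here are illustrative assumptions; see https://github.com/TencentARC/TokLIP for the official implementation.

import torch
import torch.nn as nn


class ToyVQTokenizer(nn.Module):
    # Nearest-neighbour vector quantizer over a learned codebook (low-level discrete tokens).
    def __init__(self, codebook_size=8192, dim=256):
        super().__init__()
        self.codebook = nn.Embedding(codebook_size, dim)

    def forward(self, patch_feats):
        # patch_feats: (B, N, dim) continuous patch features from an upstream patchifier.
        b, n, d = patch_feats.shape
        dists = torch.cdist(patch_feats.reshape(-1, d), self.codebook.weight)  # (B*N, K)
        indices = dists.argmin(dim=-1).view(b, n)         # discrete VQ token ids
        quantized = self.codebook(indices)                # (B, N, dim) token embeddings
        return indices, quantized


class SemanticTokenEncoder(nn.Module):
    # ViT-style Transformer that maps VQ token embeddings to high-level continuous semantics.
    def __init__(self, dim=256, clip_dim=512, depth=4, heads=8):
        super().__init__()
        layer = nn.TransformerEncoderLayer(d_model=dim, nhead=heads, batch_first=True)
        self.encoder = nn.TransformerEncoder(layer, num_layers=depth)
        self.to_clip = nn.Linear(dim, clip_dim)           # project into a CLIP-like space

    def forward(self, quantized):
        pooled = self.encoder(quantized).mean(dim=1)      # (B, dim) pooled token features
        return self.to_clip(pooled)                       # (B, clip_dim) semantic embedding


if __name__ == "__main__":
    feats = torch.randn(2, 196, 256)                      # e.g. 14x14 patches per image
    tokenizer, encoder = ToyVQTokenizer(), SemanticTokenEncoder()
    ids, quantized = tokenizer(feats)                     # discrete ids feed autoregressive generation
    semantics = encoder(quantized)                        # semantic vector for CLIP-style comprehension
    print(ids.shape, semantics.shape)                     # (2, 196) and (2, 512)

In this sketch the generation objective would operate on the discrete ids while the comprehension objective would align the pooled semantic vector with CLIP embeddings, reflecting the paper's point that the two training objectives are disentangled and the VQ tokenizer needs no tailored quantization.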

Haokun Lin, Teng Wang, Yixiao Ge, Yuying Ge, Zhichao Lu, Ying Wei, Qingfu Zhang, Zhenan Sun, Ying Shan

Subject: Computing Technology; Computer Technology

Haokun Lin, Teng Wang, Yixiao Ge, Yuying Ge, Zhichao Lu, Ying Wei, Qingfu Zhang, Zhenan Sun, Ying Shan. TokLIP: Marry Visual Tokens to CLIP for Multimodal Comprehension and Generation [EB/OL]. (2025-05-08) [2025-06-06]. https://arxiv.org/abs/2505.05422.
