Advancing Compositional Awareness in CLIP with Efficient Fine-Tuning

Source: Arxiv
English Abstract

Vision-language models like CLIP have demonstrated remarkable zero-shot capabilities in classification and retrieval. However, these models often struggle with compositional reasoning, the ability to understand the relationships between concepts. A recent benchmark, SugarCrepe++, reveals that previous works on improving compositionality have mainly improved lexical sensitivity while neglecting semantic understanding. In addition, downstream retrieval performance often deteriorates, even though one would expect improving compositionality to enhance retrieval. In this work, we introduce CLIC (Compositionally-aware Learning in CLIP), a fine-tuning method based on a novel training technique that combines multiple images and their associated captions. CLIC improves compositionality across architectures as well as differently pre-trained CLIP models, in terms of both lexical and semantic understanding, and achieves consistent gains in retrieval performance. This even holds for the recent CLIPS, which already achieves SOTA retrieval performance: the short fine-tuning with CLIC leads to a further improvement in retrieval and yields the best compositional CLIP model on SugarCrepe++. All our models and code are available at https://clic-compositional-clip.github.io.
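
The abstract does not specify the CLIC training objective. As background only, the sketch below shows the standard symmetric image-text contrastive (InfoNCE) loss that CLIP fine-tuning methods of this kind build on; it is a hedged illustration under the assumption of ordinary paired image-caption batches, not the authors' multi-image, multi-caption technique, and the embedding dimension and batch size are invented for the example.

    # Minimal sketch (assumption): the generic CLIP contrastive objective,
    # NOT the CLIC method described in the paper.
    import torch
    import torch.nn.functional as F

    def clip_contrastive_loss(image_emb: torch.Tensor,
                              text_emb: torch.Tensor,
                              temperature: float = 0.07) -> torch.Tensor:
        """Symmetric image-text InfoNCE loss over a batch of paired embeddings."""
        # Normalize so dot products are cosine similarities.
        image_emb = F.normalize(image_emb, dim=-1)
        text_emb = F.normalize(text_emb, dim=-1)

        # Pairwise similarity matrix, scaled by the temperature.
        logits = image_emb @ text_emb.t() / temperature

        # The i-th image matches the i-th caption; all other pairs are negatives.
        targets = torch.arange(logits.size(0), device=logits.device)
        loss_i2t = F.cross_entropy(logits, targets)      # image -> text
        loss_t2i = F.cross_entropy(logits.t(), targets)  # text -> image
        return 0.5 * (loss_i2t + loss_t2i)

    if __name__ == "__main__":
        # Toy usage: random tensors stand in for CLIP encoder outputs
        # (hypothetical batch of 8 pairs with 512-dim embeddings).
        imgs = torch.randn(8, 512)
        txts = torch.randn(8, 512)
        print(clip_contrastive_loss(imgs, txts).item())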

Amit Peleg, Naman Deep Singh, Matthias Hein

Computing Technology, Computer Technology

Amit Peleg, Naman Deep Singh, Matthias Hein. Advancing Compositional Awareness in CLIP with Efficient Fine-Tuning [EB/OL]. (2025-05-30) [2025-07-09]. https://arxiv.org/abs/2505.24424.
