Helping CLIP See Both the Forest and the Trees: A Decomposition and Description Approach
Vision-Language Models (VLMs) such as CLIP achieve cross-modal semantic alignment through contrastive learning and exhibit robust zero-shot generalization. Traditional prompt engineering, however, relies predominantly on coarse-grained category labels, neglecting fine-grained local semantics. Existing approaches assume that VLMs inherently recognize localized visual details and attempt to enhance classification by augmenting text prompts with attribute descriptors generated by large language models. However, our systematic experiments reveal a critical limitation: CLIP's strong bias toward global image patterns hinders its ability to exploit localized visual descriptors. To address this fundamental constraint, we propose Decomposition and Description (D&D), a simple, effective, and plug-and-play method that enables CLIP to "see both the forest and the trees." Specifically, we employ stochastic multi-crop augmentation to activate CLIP's latent capacity for localized feature analysis. Cropping only partial regions effectively constrains the model's receptive field and recalibrates its attention, thereby mitigating the global bias. We evaluate the proposed method under zero-shot, few-shot, and test-time adaptation settings, and extensive experiments demonstrate that D&D achieves promising performance.
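The following is a minimal sketch (not the authors' released code) of the core idea in the abstract: scoring an image with CLIP over stochastic multi-crops so that attribute-level text descriptors are matched against localized regions rather than the full image. It assumes OpenAI's `clip` package; `attribute_prompts` stands in for the LLM-generated descriptors mentioned above, and the mean aggregation over crops is only one plausible choice.

```python
import torch
import clip
from PIL import Image
from torchvision import transforms

device = "cuda" if torch.cuda.is_available() else "cpu"
model, preprocess = clip.load("ViT-B/32", device=device)

# Stochastic multi-crop: sample small regions to constrain the receptive field.
random_crop = transforms.RandomResizedCrop(224, scale=(0.2, 0.5))

def multi_crop_scores(image_path, attribute_prompts, n_crops=8):
    image = Image.open(image_path).convert("RGB")
    # Preprocess each random crop with CLIP's own normalization pipeline.
    crops = torch.stack([preprocess(random_crop(image)) for _ in range(n_crops)]).to(device)
    tokens = clip.tokenize(attribute_prompts).to(device)

    with torch.no_grad():
        img_feats = model.encode_image(crops)   # (n_crops, d)
        txt_feats = model.encode_text(tokens)   # (n_prompts, d)
        img_feats = img_feats / img_feats.norm(dim=-1, keepdim=True)
        txt_feats = txt_feats / txt_feats.norm(dim=-1, keepdim=True)
        sims = img_feats @ txt_feats.T          # crop-by-descriptor similarities

    # Aggregate over crops (mean here; the paper may use a different rule).
    return sims.mean(dim=0)

# Example usage with hypothetical LLM-generated descriptors:
# scores = multi_crop_scores("bird.jpg", ["a photo of a red beak", "a photo of webbed feet"])
```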
Leyan Xue, Zongbo Han, Guangyu Wang, Qinghua Hu, Mingyue Cheng, Changqing Zhang
Computing Technology, Computer Technology
Leyan Xue, Zongbo Han, Guangyu Wang, Qinghua Hu, Mingyue Cheng, Changqing Zhang. Helping CLIP See Both the Forest and the Trees: A Decomposition and Description Approach [EB/OL]. (2025-07-04) [2025-07-16]. https://arxiv.org/abs/2507.03458.