AmorLIP: Efficient Language-Image Pretraining via Amortization
Contrastive Language-Image Pretraining (CLIP) has demonstrated strong zero-shot performance across diverse downstream text-image tasks. Existing CLIP methods typically optimize a contrastive objective using negative samples drawn from each minibatch. To achieve robust representation learning, these methods require extremely large batch sizes, escalating computational demands to hundreds or even thousands of GPUs. Prior approaches to mitigating this issue often compromise downstream performance, prolong training duration, or face scalability challenges on very large datasets. To overcome these limitations, we propose AmorLIP, an efficient CLIP pretraining framework that amortizes the expensive computations involved in contrastive learning through lightweight neural networks, substantially improving training efficiency and performance. Leveraging insights from a spectral factorization of energy-based models, we introduce novel amortization objectives together with practical techniques to improve training stability. Extensive experiments across 38 downstream tasks demonstrate the superior zero-shot classification and retrieval capabilities of AmorLIP, which consistently outperforms standard CLIP baselines with substantial relative improvements of up to 12.24%.
Haotian Sun, Yitong Li, Yuchen Zhuang, Niao He, Hanjun Dai, Bo Dai
Information Science, Information Technology; Computing Technology, Computer Technology
Haotian Sun, Yitong Li, Yuchen Zhuang, Niao He, Hanjun Dai, Bo Dai. AmorLIP: Efficient Language-Image Pretraining via Amortization [EB/OL]. (2025-05-25) [2025-06-24]. https://arxiv.org/abs/2505.18983.