
Generalizable Prompt Learning of CLIP: A Brief Overview

Source: arXiv

Abstract

Existing vision-language models (VLMs) such as CLIP have demonstrated an impressive ability to generalize across a variety of downstream tasks. These models exploit the synergy between visual and textual information, allowing them to understand and reason about image and text content in a unified manner. This article provides a brief overview of few-shot prompt learning for CLIP, covering the experimental results and technical characteristics of representative methods. The review is intended as a reference for researchers who are new to generalizable prompt learning of CLIP via few-shot training, evaluated on classification across 15 datasets, and as a starting point for researchers seeking to integrate these techniques into other downstream tasks.
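To make the surveyed setting concrete, the sketch below illustrates CoOp-style prompt learning, a representative approach in this line of work: a small set of learnable context vectors is prepended to the class-name token embeddings and trained on a few labeled examples while the CLIP encoders remain frozen. Everything here is an illustrative stand-in (toy tensors and a linear layer in place of CLIP's actual towers and tokenizer), not the implementation of any specific paper.

# Minimal sketch of CoOp-style prompt learning. Only the shared context
# vectors are trainable; encoders and all shapes are toy assumptions.
import torch
import torch.nn as nn
import torch.nn.functional as F

class PromptLearner(nn.Module):
    """Learnable context vectors prepended to frozen class-name embeddings."""
    def __init__(self, n_ctx: int, ctx_dim: int, class_embeds: torch.Tensor):
        super().__init__()
        # Shared learnable context: the only trainable parameters.
        self.ctx = nn.Parameter(torch.randn(n_ctx, ctx_dim) * 0.02)
        # Frozen class-name token embeddings, shape (n_classes, n_tok, ctx_dim).
        self.register_buffer("class_embeds", class_embeds)

    def forward(self) -> torch.Tensor:
        n_classes = self.class_embeds.size(0)
        ctx = self.ctx.unsqueeze(0).expand(n_classes, -1, -1)
        # Per-class prompt: [learnable context] + [class-name tokens].
        return torch.cat([ctx, self.class_embeds], dim=1)

# --- toy usage: all dimensions and encoders below are illustrative ---
n_classes, n_ctx, ctx_dim, n_tok, feat_dim = 10, 4, 512, 3, 512
class_embeds = torch.randn(n_classes, n_tok, ctx_dim)  # stand-in for token embeds
text_encoder = nn.Linear(ctx_dim, feat_dim).requires_grad_(False)  # stand-in text tower
prompt_learner = PromptLearner(n_ctx, ctx_dim, class_embeds)
optimizer = torch.optim.SGD(prompt_learner.parameters(), lr=2e-3)

image_feats = F.normalize(torch.randn(16, feat_dim), dim=-1)  # stand-in image feats
labels = torch.randint(0, n_classes, (16,))

prompts = prompt_learner()  # (n_classes, n_ctx + n_tok, ctx_dim)
text_feats = F.normalize(text_encoder(prompts).mean(dim=1), dim=-1)
logits = 100.0 * image_feats @ text_feats.t()  # CLIP-style cosine-similarity logits
loss = F.cross_entropy(logits, labels)
loss.backward()
optimizer.step()

Note that gradients flow only into self.ctx, which is what makes few-shot prompt learning parameter-efficient relative to fine-tuning the full model; the generalization behavior of such learned prompts is exactly what the surveyed methods differ on.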

Fangming Cui, Yonggang Zhang, Xuan Wang, Xule Wang, Liang Xiao

Subject: Computing Technology; Computer Technology

Fangming Cui, Yonggang Zhang, Xuan Wang, Xule Wang, Liang Xiao. Generalizable Prompt Learning of CLIP: A Brief Overview [EB/OL]. (2025-03-03) [2025-06-18]. https://arxiv.org/abs/2503.01263.
