
Low-Rank Few-Shot Adaptation of Vision-Language Models

Source: arXiv

Abstract

Recent progress in the few-shot adaptation of Vision-Language Models (VLMs) has further pushed their generalization capabilities, requiring only a few labeled samples from the target downstream task. However, this promising and already quite abundant few-shot literature has focused principally on prompt learning and, to a lesser extent, on adapters, overlooking recent advances in Parameter-Efficient Fine-Tuning (PEFT). Furthermore, existing few-shot learning methods for VLMs often rely on heavy training procedures and/or carefully chosen, task-specific hyper-parameters, which may impede their applicability. In response, we introduce Low-Rank Adaptation (LoRA) for few-shot learning in VLMs and show its potential on 11 datasets, in comparison with current state-of-the-art prompt- and adapter-based approaches. Surprisingly, our simple CLIP-LoRA method yields substantial improvements while reducing training times and keeping the same hyper-parameters across all target tasks, i.e., across all datasets and numbers of shots. These surprising results do not, of course, dismiss the potential of prompt-learning and adapter-based research; however, we believe our strong baseline could be used to evaluate progress on these emergent topics in few-shot VLMs.
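
The abstract applies LoRA to the encoders of a VLM such as CLIP. The following is a minimal sketch of the underlying low-rank mechanism, not the authors' released implementation: a frozen pretrained linear layer is augmented with a trainable update W + (alpha/r)·BA. The class name LoRALinear, the default rank r, and the q_proj/v_proj attribute names in the usage comment are illustrative assumptions.

import torch
import torch.nn as nn

class LoRALinear(nn.Module):
    # Wraps a frozen pretrained linear layer with a trainable low-rank
    # update: h = W x + (alpha / r) * B A x, with A (r x d_in), B (d_out x r).
    def __init__(self, base: nn.Linear, r: int = 2, alpha: float = 1.0):
        super().__init__()
        self.base = base
        for p in self.base.parameters():
            p.requires_grad = False  # pretrained weights stay frozen
        self.lora_A = nn.Parameter(torch.randn(r, base.in_features) * 0.01)
        self.lora_B = nn.Parameter(torch.zeros(base.out_features, r))  # zero init: no change at step 0
        self.scaling = alpha / r

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        return self.base(x) + self.scaling * (x @ self.lora_A.T @ self.lora_B.T)

# Hypothetical usage: wrap the attention projections of a CLIP block.
# "q_proj"/"v_proj" are assumed attribute names, not a specific library's API.
# block.attn.q_proj = LoRALinear(block.attn.q_proj, r=2)
# block.attn.v_proj = LoRALinear(block.attn.v_proj, r=2)

Because B is initialized to zero, the wrapped layer reproduces the pretrained model exactly at the start of training, and only the two small matrices A and B receive gradients, which is what keeps this style of adaptation lightweight.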

Maxime Zanella, Ismail Ben Ayed

Computing Technology; Computer Technology

Maxime Zanella, Ismail Ben Ayed. Low-Rank Few-Shot Adaptation of Vision-Language Models [EB/OL]. (2024-05-28) [2025-05-12]. https://arxiv.org/abs/2405.18541.
