
Optimizing Active Learning in Vision-Language Models via Parameter-Efficient Uncertainty Calibration

Source: arXiv

Abstract

Active Learning (AL) has emerged as a powerful approach for minimizing labeling costs by selectively sampling the most informative data for neural network model development. Effective AL for large-scale vision-language models necessitates addressing challenges in uncertainty estimation and efficient sampling given the vast number of parameters involved. In this work, we introduce a novel parameter-efficient learning methodology that incorporates uncertainty calibration loss within the AL framework. We propose a differentiable loss function that promotes uncertainty calibration for effectively selecting fewer and more informative data samples for fine-tuning. Through extensive experiments across several datasets and vision backbones, we demonstrate that our solution can match or exceed the performance of complex feature-based sampling techniques while being computationally very efficient. Additionally, we investigate the efficacy of prompt learning versus low-rank adaptation (LoRA) in sample selection, providing a detailed comparative analysis of these methods in the context of efficient AL.
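
The abstract does not include implementation details. As a rough illustration only, the sketch below shows one common way such a pipeline can be structured: a differentiable penalty that nudges predicted confidence toward empirical correctness on labeled data, plus an entropy-based acquisition step over the unlabeled pool. It is a minimal sketch under assumed details (PyTorch, a classifier returning logits, an unlabeled loader yielding (index, image) batches); the function names, loss form, and selection criterion are hypothetical stand-ins, not the loss or sampler proposed in the paper.

import torch
import torch.nn.functional as F


def calibration_penalty(logits, targets):
    # Generic surrogate (not the paper's loss): penalize the gap between
    # predicted confidence and empirical correctness on labeled data.
    probs = F.softmax(logits, dim=-1)
    confidence, preds = probs.max(dim=-1)
    correctness = (preds == targets).float()
    return ((confidence - correctness) ** 2).mean()


def select_most_uncertain(model, unlabeled_loader, budget, device="cpu"):
    # Rank unlabeled samples by predictive entropy and return the indices
    # of the `budget` most uncertain ones for annotation.
    model.eval()
    all_scores, all_indices = [], []
    with torch.no_grad():
        for indices, images in unlabeled_loader:  # assumed (index, image) batches
            logits = model(images.to(device))
            probs = F.softmax(logits, dim=-1)
            entropy = -(probs * probs.clamp_min(1e-12).log()).sum(dim=-1)
            all_scores.append(entropy.cpu())
            all_indices.append(indices)
    scores = torch.cat(all_scores)
    indices = torch.cat(all_indices)
    top = scores.topk(min(budget, scores.numel())).indices
    return indices[top]

In a parameter-efficient setting (LoRA adapters or learnable prompt tokens), a penalty of this kind would typically be added to the standard cross-entropy objective, e.g. loss = ce_loss + lam * calibration_penalty(logits, targets), with gradients flowing only through the adapter or prompt parameters; the weighting and exact form used by the authors are not specified in this abstract.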

Athmanarayanan Lakshmi Narayanan, Amrutha Machireddy, Ranganath Krishnan

Subject classification: Computing Technology; Computer Technology

Athmanarayanan Lakshmi Narayanan, Amrutha Machireddy, Ranganath Krishnan. Optimizing Active Learning in Vision-Language Models via Parameter-Efficient Uncertainty Calibration [EB/OL]. (2025-07-29) [2025-08-11]. https://arxiv.org/abs/2507.21521.