
Adaptive Rank, Reduced Forgetting: Knowledge Retention in Continual Learning Vision-Language Models with Dynamic Rank-Selective LoRA

Source: arXiv
English Abstract

We investigate whether the pre-trained knowledge in vision-language models (VLMs), such as CLIP, can be retained -- or even enhanced -- in continual learning (CL) while incorporating new knowledge from the data stream. Existing CL methods primarily focus on continual downstream adaptation using components isolated from the pre-trained model (PTM), which increases inference complexity and limits improvements to the PTM itself; some also retain knowledge by relying on additional reference data, leading to high training costs. To address these limitations, we propose a universal and efficient continual learning approach for VLMs based on Dynamic Rank-Selective LoRA (CoDyRA), which directly improves the PTM while preserving the existing knowledge from both pre-training and CL. Through analyses of how LoRA rank and placement affect and regularize learning and forgetting in CL, we design CoDyRA to adaptively perform rank-minimized parameter updates in different modules, based on their importance to the current data. This balances knowledge acquisition (plasticity) against forgetting mitigation (stability). Our method operates without explicit domain or distribution prediction and does not rely on reference data, enabling seamless task integration while maintaining pre-trained capabilities. Moreover, CoDyRA preserves the original model architecture and deployment pipeline, introducing no additional inference overhead. Extensive experiments demonstrate that our approach enhances representations on new downstream data while retaining pre-trained knowledge, achieving state-of-the-art results.
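Since the abstract does not detail the update mechanism, below is a minimal PyTorch sketch of what a rank-selective LoRA layer could look like, assuming learnable per-rank importance gates that are sparsified during training. The class name, gating scheme, and merge threshold are illustrative assumptions, not the authors' implementation.

```python
import torch
import torch.nn as nn

class RankSelectiveLoRA(nn.Module):
    """Hypothetical sketch: each of the r rank-1 components of the
    low-rank update B @ A is scaled by a gate; a sparsity penalty on
    the gates (added to the training loss) prunes unimportant ranks,
    so each module keeps only the ranks it needs for the current data.
    """

    def __init__(self, base_linear: nn.Linear, r: int = 8, alpha: float = 16.0):
        super().__init__()
        self.base = base_linear  # frozen pre-trained projection
        for p in self.base.parameters():
            p.requires_grad_(False)
        in_f, out_f = base_linear.in_features, base_linear.out_features
        self.A = nn.Parameter(torch.randn(r, in_f) * 0.01)
        self.B = nn.Parameter(torch.zeros(out_f, r))
        self.gates = nn.Parameter(torch.ones(r))  # per-rank importance gates
        self.scaling = alpha / r

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        # Gated low-rank update: W x + scaling * B diag(g) A x
        delta = (x @ self.A.T) * self.gates  # (batch, r), rank-wise gating
        delta = delta @ self.B.T             # (batch, out_f)
        return self.base(x) + self.scaling * delta

    @torch.no_grad()
    def merge(self, threshold: float = 1e-3) -> nn.Linear:
        # Fold surviving ranks back into the frozen weight, so the
        # deployed model has the original architecture and no extra cost.
        keep = self.gates.abs() > threshold
        update = self.B[:, keep] @ (self.gates[keep].unsqueeze(1) * self.A[keep])
        self.base.weight += self.scaling * update
        return self.base
```

For CLIP, such a module could wrap each attention projection in the vision and text encoders; after training on a task, calling merge() folds the retained ranks into the pre-trained weights, which mirrors the abstract's claim of preserving the original architecture and deployment pipeline with no additional inference overhead.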

Kristen Moore, Jason Xue, Haodong Lu, Chongyang Zhao, Lina Yao, Dong Gong

Subjects: Computing Technology, Computer Technology

Kristen Moore, Jason Xue, Haodong Lu, Chongyang Zhao, Lina Yao, Dong Gong. Adaptive Rank, Reduced Forgetting: Knowledge Retention in Continual Learning Vision-Language Models with Dynamic Rank-Selective LoRA [EB/OL]. (2024-12-01) [2025-05-05]. https://arxiv.org/abs/2412.01004
