LLaVA-c: Continual Improved Visual Instruction Tuning
Multimodal models like LLaVA-1.5 achieve state-of-the-art visual understanding through visual instruction tuning on multitask datasets, enabling strong instruction-following and multimodal performance. However, multitask learning faces challenges such as task balancing, which requires careful adjustment of data proportions, and expansion costs, where adding new tasks risks catastrophic forgetting and demands costly retraining. Continual learning offers a promising alternative, acquiring new knowledge incrementally while preserving existing capabilities. Yet current methods prioritize task-specific performance and neglect base model degradation caused by overfitting to specific instructions, which undermines general capabilities. In this work, we propose LLaVA-c, a simple but effective method that makes two modifications to LLaVA-1.5: spectral-aware consolidation for improved task balance and unsupervised inquiry regularization to prevent base model degradation. We evaluate both general and task-specific performance across continual pretraining and fine-tuning. Experiments demonstrate that LLaVA-c consistently enhances standard benchmark performance while preserving general capabilities. For the first time, we show that task-by-task continual learning can match or surpass multitask joint learning. The code will be publicly released.
Wenzhuo Liu, Fei Zhu, Haiyang Guo, Longhui Wei, Cheng-Lin Liu
Computing Technology, Computer Technology
Wenzhuo Liu, Fei Zhu, Haiyang Guo, Longhui Wei, Cheng-Lin Liu. LLaVA-c: Continual Improved Visual Instruction Tuning [EB/OL]. (2025-06-10) [2025-07-16]. https://arxiv.org/abs/2506.08666