Beyond Freezing: Sparse Tuning Enhances Plasticity in Continual Learning with Pre-Trained Models

Source: arXiv

Abstract

Continual Learning with Pre-trained Models (PTMs) holds great promise for efficient adaptation across sequential tasks. However, most existing approaches freeze PTMs and rely on auxiliary modules like prompts or adapters, limiting model plasticity and leading to suboptimal generalization when facing significant distribution shifts. While full fine-tuning can improve adaptability, it risks disrupting crucial pre-trained knowledge. In this paper, we propose Mutual Information-guided Sparse Tuning (MIST), a plug-and-play method that selectively updates a small subset of PTM parameters (less than 5%) based on sensitivity to mutual information objectives. MIST enables effective task-specific adaptation while preserving generalization. To further reduce interference, we introduce strong sparsity regularization by randomly dropping gradients during tuning, resulting in fewer than 0.5% of parameters being updated per step. Applied before standard freeze-based methods, MIST consistently boosts performance across diverse continual learning benchmarks. Experiments show that integrating our method into multiple baselines yields significant performance gains. Our code is available at https://github.com/zhwhu/MIST.
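The abstract describes two mechanisms: selecting a small trainable subset of parameters (under 5%) by their sensitivity to a mutual-information objective, and randomly dropping gradients so that fewer than 0.5% of parameters move per step. The following is a minimal PyTorch-style sketch of one plausible reading of that recipe; the function names (build_sensitivity_mask, sparse_update_step), the use of a precomputed sensitivity score per parameter, and all hyperparameter values are illustrative assumptions, not the authors' implementation (see the linked repository for that).

    import torch

    @torch.no_grad()
    def build_sensitivity_mask(model, sensitivity_scores, keep_ratio=0.05):
        # Keep the top `keep_ratio` fraction of each tensor's entries, ranked by a
        # precomputed sensitivity score (assumed here to be, e.g., the absolute
        # gradient of a mutual-information objective w.r.t. that parameter).
        masks = {}
        for name, p in model.named_parameters():
            s = sensitivity_scores[name]
            k = max(1, int(keep_ratio * s.numel()))
            threshold = s.flatten().topk(k).values.min()
            masks[name] = (s >= threshold).float()
        return masks

    def sparse_update_step(model, loss, masks, drop_prob=0.9, lr=1e-3):
        # One tuning step: restrict gradients to the selected subset, then randomly
        # drop most of the surviving gradients. With keep_ratio=0.05 and
        # drop_prob=0.9, roughly 0.5% of parameters are updated per step.
        model.zero_grad()
        loss.backward()
        with torch.no_grad():
            for name, p in model.named_parameters():
                if p.grad is None:
                    continue
                keep = masks[name] * (torch.rand_like(p) > drop_prob).float()
                p -= lr * p.grad * keep

In the usage pattern the abstract suggests, such a sparse-tuning phase would run first, and the partially adapted backbone would then be handed to a standard freeze-based continual learner (prompt- or adapter-based) as a plug-and-play preprocessing step.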

Huan Zhang, Fan Lyu, Shuyu Dong, Shenghua Fan, Yujin Zheng, Dingwen Wang

Computing technology; computer technology

Huan Zhang, Fan Lyu, Shuyu Dong, Shenghua Fan, Yujin Zheng, Dingwen Wang. Beyond Freezing: Sparse Tuning Enhances Plasticity in Continual Learning with Pre-Trained Models [EB/OL]. (2025-05-26) [2025-06-16]. https://arxiv.org/abs/2505.19943.