Flashbacks to Harmonize Stability and Plasticity in Continual Learning
We introduce Flashback Learning (FL), a novel method designed to harmonize the stability and plasticity of models in Continual Learning (CL). Unlike prior approaches that primarily focus on regularizing model updates to preserve old information while learning new concepts, FL explicitly balances this trade-off through a bidirectional form of regularization. This approach effectively guides the model to swiftly incorporate new knowledge while actively retaining its old knowledge. FL operates through a two-phase training process and can be seamlessly integrated into various CL methods, including replay, parameter regularization, distillation, and dynamic architecture techniques. In designing FL, we use two distinct knowledge bases: one to enhance plasticity and another to improve stability. FL ensures a more balanced model by utilizing both knowledge bases to regularize model updates. Theoretically, we analyze how the FL mechanism enhances the stability-plasticity balance. Empirically, FL demonstrates tangible improvements over baseline methods within the same training budget. By integrating FL into at least one representative baseline from each CL category, we observed an average accuracy improvement of up to 4.91% in Class-Incremental and 3.51% in Task-Incremental settings on standard image classification benchmarks. Additionally, measurements of the stability-to-plasticity ratio confirm that FL effectively enhances this balance. FL also outperforms state-of-the-art CL methods on more challenging datasets like ImageNet.
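The abstract does not spell out the exact form of the bidirectional regularization, but a minimal PyTorch-style sketch of the idea it describes, assuming distillation-style knowledge bases, might look like the following. The function name flashback_regularizer and the weights lambda_stability and lambda_plasticity are illustrative placeholders, not the paper's actual API.

import torch
import torch.nn.functional as F

def flashback_regularizer(logits, stable_kb_logits, plastic_kb_logits,
                          lambda_stability=1.0, lambda_plasticity=1.0):
    """Illustrative bidirectional regularizer: pulls the current model's
    predictions toward two knowledge bases at once, one capturing old
    knowledge (stability) and one capturing new knowledge (plasticity)."""
    log_p = F.log_softmax(logits, dim=1)
    # Stability term: stay close to the old-knowledge base outputs.
    stability = F.kl_div(log_p, F.softmax(stable_kb_logits, dim=1),
                         reduction="batchmean")
    # Plasticity term: stay close to the new-knowledge base outputs.
    plasticity = F.kl_div(log_p, F.softmax(plastic_kb_logits, dim=1),
                          reduction="batchmean")
    return lambda_stability * stability + lambda_plasticity * plasticity

# Usage: add the regularizer to the ordinary task loss for a batch.
logits = torch.randn(8, 10)        # current model outputs (dummy data)
stable_kb = torch.randn(8, 10)     # stability knowledge base outputs
plastic_kb = torch.randn(8, 10)    # plasticity knowledge base outputs
task_loss = F.cross_entropy(logits, torch.randint(0, 10, (8,)))
loss = task_loss + flashback_regularizer(logits, stable_kb, plastic_kb)

Weighting both terms, rather than only a stability penalty as in typical regularization-based CL methods, is what makes the regularization bidirectional in the sense the abstract describes.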
Leila Mahmoodi, Peyman Moghadam, Munawar Hayat, Christian Simon, Mehrtash Harandi
Computing Technology, Computer Technology
Leila Mahmoodi, Peyman Moghadam, Munawar Hayat, Christian Simon, Mehrtash Harandi. Flashbacks to Harmonize Stability and Plasticity in Continual Learning [EB/OL]. (2025-05-31) [2025-06-15]. https://arxiv.org/abs/2506.00477.