Understanding the behavior of representation forgetting in continual learning
In continual learning scenarios, catastrophic forgetting of previously learned tasks is a critical issue, making it essential to measure such forgetting effectively. Recently, there has been growing interest in representation forgetting, i.e., forgetting measured at the hidden layers. In this paper, we provide the first theoretical analysis of representation forgetting and use this analysis to better understand the behavior of continual learning. First, we introduce a new metric called representation discrepancy, which measures the difference between the representation spaces constructed by two snapshots of a model trained through continual learning. We demonstrate that our proposed metric serves as an effective surrogate for representation forgetting while remaining analytically tractable. Second, through mathematical analysis of our metric, we derive several key findings about the dynamics of representation forgetting: forgetting occurs more rapidly and to a greater degree as the layer index increases, while increasing the width of the network slows down the forgetting process. Third, we support our theoretical findings through experiments on real image datasets, including Split-CIFAR100 and ImageNet1K.
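The abstract does not reproduce the paper's exact definition of representation discrepancy; as a rough illustration of the underlying idea of comparing the representation spaces built by two snapshots of a continually trained model, the sketch below computes a projection distance between the principal subspaces of the two snapshots' hidden activations on a shared probe set. The function name representation_discrepancy, the choice of a top-k SVD subspace, and the random probe activations are illustrative assumptions, not the authors' metric.

import numpy as np

def representation_discrepancy(H_old, H_new, k=20):
    """Sketch of a subspace-based discrepancy between two snapshots.

    H_old, H_new: (n_samples, width) hidden activations of the old and new
    model snapshots on the same probe inputs. Returns the projection
    (chordal) distance between the top-k principal subspaces of the two
    activation matrices; 0 means identical subspaces, larger values
    indicate more change in the representation space.
    """
    # Center each activation matrix and take its top-k left singular vectors,
    # which span the snapshot's principal subspace in sample space.
    U_old, _, _ = np.linalg.svd(H_old - H_old.mean(0), full_matrices=False)
    U_new, _, _ = np.linalg.svd(H_new - H_new.mean(0), full_matrices=False)
    U_old, U_new = U_old[:, :k], U_new[:, :k]
    # Sum of squared cosines of the principal angles between the subspaces.
    overlap = np.linalg.norm(U_old.T @ U_new) ** 2
    return float(np.sqrt(max(k - overlap, 0.0)))

# Usage with stand-in activations (random matrices in place of real features):
rng = np.random.default_rng(0)
H_before = rng.standard_normal((256, 512))           # snapshot before new task
H_after = H_before + 0.1 * rng.standard_normal((256, 512))  # snapshot after new task
print(representation_discrepancy(H_before, H_after, k=20))

Under this illustrative setup, recomputing the discrepancy layer by layer (or for networks of different widths) would let one probe the abstract's claims that forgetting grows with layer index and shrinks with width.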
Joonkyu Kim, Yejin Kim, Jy-yong Sohn
Computing Technology, Computer Technology
Joonkyu Kim, Yejin Kim, Jy-yong Sohn. Understanding the behavior of representation forgetting in continual learning [EB/OL]. (2025-05-27) [2025-06-06]. https://arxiv.org/abs/2505.20970.