
Self-Supervised Training Enhances Online Continual Learning

Source: arXiv
Abstract

In continual learning, a system must incrementally learn from a non-stationary data stream without catastrophic forgetting. Recently, multiple methods have been devised for incrementally learning classes on large-scale image classification tasks, such as ImageNet. State-of-the-art continual learning methods use an initial supervised pre-training phase, in which the first 10% - 50% of the classes in a dataset are used to learn representations in an offline manner before continual learning of new classes begins. We hypothesize that self-supervised pre-training could yield features that generalize better than supervised learning, especially when the number of samples used for pre-training is small. We test this hypothesis using the self-supervised MoCo-V2, Barlow Twins, and SwAV algorithms. On ImageNet, we find that these methods outperform supervised pre-training considerably for online continual learning, and the gains are larger when fewer samples are available. Our findings are consistent across three online continual learning algorithms. Our best system achieves a 14.95% relative increase in top-1 accuracy on class incremental ImageNet over the prior state of the art for online continual learning.
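
The sketch below is a minimal, self-contained illustration (not the authors' code) of the two-phase setup the abstract describes: a backbone pre-trained on an initial subset of classes, followed by online continual learning on a data stream. It assumes a frozen ResNet-18 feature extractor standing in for a MoCo-V2 / Barlow Twins / SwAV pre-trained network, and a plain streaming linear classifier standing in for the online continual learning algorithms evaluated in the paper; the `online_step` helper and the random-tensor stream are illustrative placeholders.

```python
# Minimal sketch of self-supervised pre-training followed by online
# continual learning. Placeholder components only; see the paper for
# the actual methods (MoCo-V2, Barlow Twins, SwAV, and the online learners).
import torch
import torch.nn as nn
from torchvision.models import resnet18

# --- Phase 1: pre-trained backbone (placeholder) --------------------------
# In the paper, the backbone is pre-trained with a self-supervised method on
# the first subset of ImageNet classes. Here we simply instantiate an
# untrained ResNet-18 and treat its weights as if they came from that phase.
backbone = resnet18(weights=None)
feature_dim = backbone.fc.in_features
backbone.fc = nn.Identity()          # keep only the feature extractor
backbone.eval()
for p in backbone.parameters():      # freeze features for the online phase
    p.requires_grad_(False)

# --- Phase 2: online continual learning (simplified) ----------------------
# A streaming linear classifier updated one sample at a time stands in for
# the online continual learning algorithms compared in the paper.
num_classes = 1000
classifier = nn.Linear(feature_dim, num_classes)
optimizer = torch.optim.SGD(classifier.parameters(), lr=0.01)
criterion = nn.CrossEntropyLoss()

def online_step(image: torch.Tensor, label: torch.Tensor) -> float:
    """Update the classifier on a single (image, label) pair from the stream."""
    with torch.no_grad():
        features = backbone(image.unsqueeze(0))   # (1, feature_dim)
    logits = classifier(features)                 # (1, num_classes)
    loss = criterion(logits, label.unsqueeze(0))
    optimizer.zero_grad()
    loss.backward()
    optimizer.step()
    return loss.item()

# Hypothetical data stream: random tensors in place of ImageNet images.
for t in range(5):
    x = torch.randn(3, 224, 224)
    y = torch.randint(0, num_classes, ())
    print(f"step {t}: loss = {online_step(x, y):.4f}")
```

In the paper itself, phase 1 uses MoCo-V2, Barlow Twins, or SwAV on the first 10%-50% of ImageNet classes, and phase 2 uses one of three online continual learning algorithms rather than the plain per-sample SGD update shown here.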

Christopher Kanan, Tyler L. Hayes, Jhair Gallardo

Subject: Computing Technology, Computer Technology

Christopher Kanan, Tyler L. Hayes, Jhair Gallardo. Self-Supervised Training Enhances Online Continual Learning [EB/OL]. (2021-03-25) [2025-08-02]. https://arxiv.org/abs/2103.14010.
