Dual Perspectives on Non-Contrastive Self-Supervised Learning
Non-contrastive approaches to self-supervised learning train an encoder and a predictor on pairs of different views of the data so as to minimize the mean discrepancy between the code predicted from the embedding of the first view and the embedding of the second view. In this setting, the stop gradient and exponential moving average iterative procedures are commonly used to avoid representation collapse, with excellent performance in downstream supervised applications. This presentation investigates these procedures from the dual theoretical viewpoints of optimization and dynamical systems. We first show that, in general, although they do not optimize the original objective, or for that matter any other smooth function, they do avoid collapse. Following Tian et al. [2021], but without any of the extra assumptions used in their proofs, we then show from a dynamical systems perspective that, in the linear case, minimizing the original objective function without a stop gradient or exponential moving average always leads to collapse. Conversely, we finally show that the limit points of the dynamical systems associated with these two procedures are, in general, asymptotically stable equilibria, with no risk of degenerating to trivial solutions.
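To make the setting concrete, here is a minimal sketch of the non-contrastive objective with a stop gradient and an exponential-moving-average (EMA) target encoder, in the style of BYOL/SimSiam-type methods. It is illustrative only and not the authors' code: the use of PyTorch, the linear encoder/predictor modules (echoing the linear case analyzed in the paper), the EMA rate, the learning rate, and the symmetrized loss are all assumptions made for the example.

```python
# Illustrative sketch only: a non-contrastive objective with stop gradient
# (.detach()-style, via torch.no_grad) and an EMA target encoder.
import torch
import torch.nn as nn

dim = 128                              # embedding dimension (assumed)
encoder = nn.Linear(dim, dim)          # online encoder (linear case)
predictor = nn.Linear(dim, dim)        # predictor
target_encoder = nn.Linear(dim, dim)   # EMA target encoder
target_encoder.load_state_dict(encoder.state_dict())
for p in target_encoder.parameters():
    p.requires_grad_(False)

optimizer = torch.optim.SGD(
    list(encoder.parameters()) + list(predictor.parameters()), lr=1e-2)
ema_rate = 0.99  # EMA decay (assumed value)

def discrepancy(view1, view2):
    # Mean discrepancy between the code predicted from the embedding of
    # view1 and the embedding of view2; the target branch gets no gradient.
    pred = predictor(encoder(view1))
    with torch.no_grad():              # stop gradient on the target branch
        target = target_encoder(view2)
    return ((pred - target) ** 2).mean()

def ema_update():
    # Exponential moving average of the online encoder parameters.
    with torch.no_grad():
        for p_t, p_o in zip(target_encoder.parameters(), encoder.parameters()):
            p_t.mul_(ema_rate).add_(p_o, alpha=1.0 - ema_rate)

# One illustrative training step on random tensors standing in for two
# augmented views of the same data.
x1, x2 = torch.randn(32, dim), torch.randn(32, dim)
optimizer.zero_grad()
loss = 0.5 * (discrepancy(x1, x2) + discrepancy(x2, x1))  # symmetrized
loss.backward()
optimizer.step()
ema_update()
```

As the abstract notes, these iterative procedures do not correspond to gradient descent on the stated objective (or any smooth function), yet they avoid the collapsed solutions reached when the objective is minimized directly without the stop gradient or EMA.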
Jean Ponce, Martial Hebert, Basile Terver
Computing and computer technology
Jean Ponce, Martial Hebert, Basile Terver. Dual Perspectives on Non-Contrastive Self-Supervised Learning [EB/OL]. (2025-06-18) [2025-07-21]. https://arxiv.org/abs/2507.01028.