Collapse-Proof Non-Contrastive Self-Supervised Learning
We present a principled and simplified design of the projector and loss function for non-contrastive self-supervised learning based on hyperdimensional computing. We theoretically demonstrate that this design introduces an inductive bias that encourages representations to be simultaneously decorrelated and clustered, without explicitly enforcing these properties. This bias provably enhances generalization and suffices to avoid known training failure modes, such as representation, dimensional, cluster, and intracluster collapses. We validate our theoretical findings on image datasets, including SVHN, CIFAR-10, CIFAR-100, and ImageNet-100. Our approach effectively combines the strengths of feature decorrelation and cluster-based self-supervised learning methods, overcoming training failure modes while achieving strong generalization in clustering and linear classification tasks.
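To make the abstract's central idea concrete, below is a minimal, hypothetical PyTorch sketch of a non-contrastive setup with a projector and a decorrelation-style objective in the spirit of Barlow Twins. All names, dimensions, and the loss form are illustrative assumptions; this is not the paper's hyperdimensional-computing-based projector or loss, only a generic stand-in for the family of feature-decorrelation methods it builds on.

```python
import torch
import torch.nn as nn

class Projector(nn.Module):
    """Hypothetical MLP projector mapping backbone features to embeddings."""
    def __init__(self, in_dim=512, out_dim=256):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(in_dim, out_dim), nn.BatchNorm1d(out_dim),
            nn.ReLU(inplace=True), nn.Linear(out_dim, out_dim),
        )

    def forward(self, x):
        return self.net(x)

def decorrelation_loss(z1, z2, lam=5e-3):
    """Generic decorrelation objective (Barlow-Twins-style stand-in):
    align two views on the diagonal of the cross-correlation matrix
    while pushing off-diagonal entries to zero, which discourages
    dimensional collapse. Not the paper's actual HDC loss."""
    n, _ = z1.shape
    z1 = (z1 - z1.mean(0)) / (z1.std(0) + 1e-6)   # standardize per dimension
    z2 = (z2 - z2.mean(0)) / (z2.std(0) + 1e-6)
    c = (z1.T @ z2) / n                           # d x d cross-correlation
    on_diag = (torch.diagonal(c) - 1).pow(2).sum()                 # invariance
    off_diag = (c - torch.diag(torch.diagonal(c))).pow(2).sum()    # redundancy
    return on_diag + lam * off_diag

# Usage: h1, h2 are backbone features of two augmented views of one batch.
proj = Projector()
h1, h2 = torch.randn(128, 512), torch.randn(128, 512)
loss = decorrelation_loss(proj(h1), proj(h2))
loss.backward()
```

The paper's contribution, per the abstract, is that its HDC-based design obtains decorrelation and clustering implicitly as an inductive bias rather than through an explicit penalty like the `off_diag` term above.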
Emanuele Sansone, Tim Lebailly, Tinne Tuytelaars
Computing Technology; Computer Technology
Emanuele Sansone, Tim Lebailly, Tinne Tuytelaars. Collapse-Proof Non-Contrastive Self-Supervised Learning [EB/OL]. (2025-07-06) [2025-08-02]. https://arxiv.org/abs/2410.04959.