An Augmentation-Aware Theory for Self-Supervised Contrastive Learning

Source: arXiv
Abstract

Self-supervised contrastive learning has emerged as a powerful tool in machine learning and computer vision for learning meaningful representations from unlabeled data. Its empirical success has encouraged many theoretical studies aimed at revealing the underlying learning mechanisms. However, in existing theoretical research, the role of data augmentation remains underexplored, especially the effects of specific augmentation types. To fill this gap, we propose, for the first time, an augmentation-aware error bound for self-supervised contrastive learning, showing that the supervised risk is bounded not only by the unsupervised risk but also, explicitly, by a trade-off induced by data augmentation. Then, under a novel semantic label assumption, we discuss how certain augmentation methods affect the error bound. Finally, we conduct both pixel- and representation-level experiments to verify our theoretical results.
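
As a rough schematic (an illustrative restatement of the claim above, not the paper's exact theorem; all symbols here are placeholders rather than the paper's notation), the abstract describes a bound of the form

\[ \mathcal{R}_{\mathrm{sup}}(f) \;\le\; \mathcal{R}_{\mathrm{un}}(f) \;+\; \Delta_{\mathrm{aug}}(\mathcal{A}), \]

where f is the learned representation, \mathcal{R}_{\mathrm{sup}} and \mathcal{R}_{\mathrm{un}} denote the supervised and unsupervised (contrastive) risks, and \Delta_{\mathrm{aug}}(\mathcal{A}) is a term depending on the augmentation distribution \mathcal{A}. The trade-off arises because stronger augmentation can tighten one part of such a term while loosening another.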

Jingyi Cui, Hongwei Wen, Yisen Wang

Computing Technology, Computer Technology

Jingyi Cui, Hongwei Wen, Yisen Wang. An Augmentation-Aware Theory for Self-Supervised Contrastive Learning [EB/OL]. (2025-05-28) [2025-06-29]. https://arxiv.org/abs/2505.22196.
