National Preprint Platform

Weakly-Supervised Contrastive Learning for Imprecise Class Labels

Source: arXiv
Abstract

Contrastive learning has achieved remarkable success in learning effective representations, with supervised contrastive learning often outperforming self-supervised approaches. In real-world scenarios, however, data annotations are often ambiguous or inaccurate, meaning that class labels may not reliably indicate whether two examples belong to the same class. This limitation restricts the applicability of supervised contrastive learning. To address this challenge, we introduce the concept of "continuous semantic similarity" to define positive and negative pairs. Instead of relying directly on imprecise class labels, we measure the semantic similarity between example pairs, which quantifies how likely they are to belong to the same category and is obtained by iteratively refining weak supervisory signals. Based on this concept, we propose a graph-theoretic framework for weakly-supervised contrastive learning, where semantic similarity serves as the graph weights. Our framework is highly versatile and can be applied to many weakly-supervised learning scenarios. We demonstrate its effectiveness through experiments in two common settings, namely noisy label learning and partial label learning, where existing methods can be easily integrated to significantly improve performance. Theoretically, we establish an error bound for our approach, showing that it can approximate supervised contrastive learning under mild conditions. The implementation code is available at https://github.com/Speechless-10308/WSC.
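The graph-weighted objective the abstract describes can be sketched as follows: a similarity-weighted contrastive loss where soft graph weights `similarity[i, j] ∈ [0, 1]` replace the hard same-class indicator of standard supervised contrastive learning. This is a minimal NumPy illustration under stated assumptions (the function name, per-anchor weight normalization, and temperature are illustrative choices, not the authors' implementation):

```python
import numpy as np

def weighted_contrastive_loss(embeddings, similarity, tau=0.5):
    """Contrastive loss with soft graph weights.

    embeddings: (n, d) array of example representations.
    similarity: (n, n) array with entries in [0, 1] estimating how
        likely each pair belongs to the same class (the "continuous
        semantic similarity"); with clean labels this would be the
        0/1 same-class indicator.
    """
    # L2-normalize so the dot product is cosine similarity.
    z = embeddings / np.linalg.norm(embeddings, axis=1, keepdims=True)
    logits = z @ z.T / tau
    np.fill_diagonal(logits, -np.inf)  # exclude self-pairs from the softmax

    # Log-softmax over each anchor's candidate pairs.
    log_prob = logits - np.log(np.exp(logits).sum(axis=1, keepdims=True))

    # Normalize the graph weights per anchor (illustrative choice).
    w = similarity.astype(float).copy()
    np.fill_diagonal(w, 0.0)
    w = w / np.clip(w.sum(axis=1, keepdims=True), 1e-12, None)

    # Weighted negative log-likelihood, averaged over anchors;
    # mask the -inf self-pair entries so 0 * -inf does not produce NaN.
    masked = np.where(np.isfinite(log_prob), log_prob, 0.0)
    return float(-(w * masked).sum() / len(z))
```

With hard 0/1 weights derived from clean labels this reduces to a standard supervised contrastive loss; the framework in the paper instead estimates soft weights from noisy or partial labels and refines them iteratively during training.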

Zi-Hao Zhou, Jun-Jie Wang, Tong Wei, Min-Ling Zhang

Subject: Computing Technology, Computer Technology

Zi-Hao Zhou, Jun-Jie Wang, Tong Wei, Min-Ling Zhang. Weakly-Supervised Contrastive Learning for Imprecise Class Labels [EB/OL]. (2025-05-28) [2025-06-24]. https://arxiv.org/abs/2505.22028.
