
Discovering Global False Negatives On the Fly for Self-supervised Contrastive Learning

Source: arXiv
Abstract

In self-supervised contrastive learning, negative pairs are typically constructed using an anchor image and a sample drawn from the entire dataset, excluding the anchor. However, this approach can result in the creation of negative pairs with similar semantics, referred to as "false negatives", leading to their embeddings being falsely pushed apart. To address this issue, we introduce GloFND, an optimization-based approach that automatically learns on the fly the threshold for each anchor data to identify its false negatives during training. In contrast to previous methods for false negative discovery, our approach globally detects false negatives across the entire dataset rather than locally within the mini-batch. Moreover, its per-iteration computation cost remains independent of the dataset size. Experimental results on image and image-text data demonstrate the effectiveness of the proposed method. Our implementation is available at https://github.com/vibalcam/GloFND.
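The abstract describes learning a per-anchor threshold on the fly and using it to flag false negatives globally across the dataset. The code below is a minimal illustrative sketch of that general idea, not the authors' implementation: it drops negatives whose similarity to the anchor exceeds a learned per-anchor threshold in an InfoNCE-style loss, and nudges each threshold toward a target flag rate. The function name, the threshold update rule, and all variables (lambdas, lr_lambda, alpha) are assumptions made for illustration; the actual GloFND algorithm is available at https://github.com/vibalcam/GloFND.

    # Illustrative sketch only (assumed names and update rule); see the
    # linked repository for the authors' actual GloFND implementation.
    import torch
    import torch.nn.functional as F

    def contrastive_loss_with_fn_filter(z1, z2, lambdas, idx,
                                        tau=0.5, lr_lambda=0.05, alpha=0.01):
        # z1, z2  : (B, d) embeddings of two augmented views of the same B images
        # lambdas : (N,) per-anchor thresholds for the whole dataset (persistent state)
        # idx     : (B,) dataset indices of the anchors in this mini-batch
        # alpha   : target fraction of negatives to treat as false negatives
        z1, z2 = F.normalize(z1, dim=1), F.normalize(z2, dim=1)
        B = z1.size(0)
        sim = z1 @ z2.t() / tau                      # (B, B) scaled similarities
        pos = sim.diag()                             # positives: matching views
        neg_mask = ~torch.eye(B, dtype=torch.bool, device=sim.device)

        lam = lambdas[idx]                           # thresholds for these anchors
        # Flag negatives whose cosine similarity exceeds the anchor's threshold.
        is_false_neg = (sim * tau > lam.unsqueeze(1)) & neg_mask
        keep = neg_mask & ~is_false_neg              # negatives retained in the loss

        # InfoNCE-style loss computed over the retained negatives only.
        exp_neg = torch.exp(sim) * keep
        loss = (-pos + torch.log(exp_neg.sum(dim=1) + torch.exp(pos))).mean()

        # Assumed threshold update: move lambda so that roughly an alpha-fraction
        # of negatives per anchor ends up above it (simple quantile tracking).
        with torch.no_grad():
            frac_above = is_false_neg.float().sum(1) / neg_mask.float().sum(1)
            lambdas[idx] = lam + lr_lambda * (frac_above - alpha)

        return loss

In use, lambdas would be a persistent buffer with one entry per training image (for example, initialized to 1.0 so that nothing is flagged at the start), so the thresholds carry over across epochs while the per-iteration cost depends only on the mini-batch size, consistent with the abstract's claim of dataset-size-independent per-iteration computation.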

Vicente Balmaseda, Bokun Wang, Ching-Long Lin, Tianbao Yang

Subjects: Computing Technology; Computer Technology

Vicente Balmaseda, Bokun Wang, Ching-Long Lin, Tianbao Yang. Discovering Global False Negatives On the Fly for Self-supervised Contrastive Learning [EB/OL]. (2025-06-25) [2025-07-20]. https://arxiv.org/abs/2502.20612.
