Contrastive Learning with Negative Sampling Correction (2401.08690v1)
Abstract: As one of the most effective self-supervised representation learning methods, contrastive learning (CL) relies on multiple negative pairs to contrast against each positive pair. In standard practice, data augmentation is used to generate both positive and negative pairs. While existing works have focused on improving positive sampling, the negative sampling process is often overlooked. In fact, the generated negative samples are often polluted by positive samples, which leads to a biased loss and degraded performance. To correct this negative sampling bias, we propose a novel contrastive learning method named Positive-Unlabeled Contrastive Learning (PUCL). PUCL treats the generated negative samples as unlabeled samples and uses information from positive samples to correct the bias in the contrastive loss. We prove that the corrected loss incurs only a negligible bias compared to the unbiased contrastive loss. PUCL can be applied to general contrastive learning problems and outperforms state-of-the-art methods on various image and graph classification tasks. The code of PUCL is provided in the supplementary file.
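The abstract does not spell out the exact PUCL objective, but the correction it describes belongs to the same family as the debiased contrastive loss of Chuang et al. (2020): the generated "negatives" are treated as unlabeled samples, and an assumed class prior is used to subtract the estimated positive contamination from the negative term. The sketch below illustrates that family of corrections; the function name, `tau_plus` hyperparameter, and defaults are illustrative assumptions, not taken from the paper's code.

```python
import math

import torch
import torch.nn.functional as F

def pu_corrected_nce_loss(z1, z2, temperature=0.5, tau_plus=0.1):
    """Contrastive (InfoNCE-style) loss with a positive-unlabeled bias correction.

    z1, z2   : (N, d) embeddings of two augmented views of the same batch;
               row i of z1 and row i of z2 form the positive pair.
    tau_plus : assumed class prior, i.e. the probability that a generated
               "negative" is actually a positive (hypothetical hyperparameter).
    """
    z1, z2 = F.normalize(z1, dim=1), F.normalize(z2, dim=1)
    n = z1.size(0)

    # Positive-pair similarities (matching rows of the two views).
    pos = torch.exp((z1 * z2).sum(dim=1) / temperature)            # (N,)

    # Cross-view similarities; the off-diagonal entries are the generated
    # "negatives", which the PU view treats as unlabeled samples.
    sim = torch.exp(z1 @ z2.t() / temperature)                     # (N, N)
    off_diag = ~torch.eye(n, dtype=torch.bool, device=z1.device)
    neg = sim[off_diag].view(n, n - 1)                             # (N, N-1)
    m = n - 1  # number of unlabeled samples per anchor

    # PU-style correction: subtract the estimated positive contamination
    # from the unlabeled term and rescale by the negative-class prior.
    ng = (neg.sum(dim=1) - m * tau_plus * pos) / (1.0 - tau_plus)
    # Clamp at the theoretical minimum so the log stays well defined.
    ng = torch.clamp(ng, min=m * math.exp(-1.0 / temperature))

    return -torch.log(pos / (pos + ng)).mean()
```

With `tau_plus = 0` the correction vanishes and the loss reduces to standard InfoNCE, which makes the class prior a natural knob for how much positive contamination one assumes among the generated negatives.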
Authors: Lu Wang, Chao Du, Pu Zhao, Chuan Luo, Zhangchi Zhu, Bo Qiao, Wei Zhang, Qingwei Lin, Saravan Rajmohan, Dongmei Zhang, Qi Zhang