Twin Contrastive Learning with Noisy Labels (2303.06930v1)

Published 13 Mar 2023 in cs.CV and cs.AI

Abstract: Learning from noisy data is a challenging task that significantly degenerates the model performance. In this paper, we present TCL, a novel twin contrastive learning model to learn robust representations and handle noisy labels for classification. Specifically, we construct a Gaussian mixture model (GMM) over the representations by injecting the supervised model predictions into GMM to link label-free latent variables in GMM with label-noisy annotations. Then, TCL detects the examples with wrong labels as the out-of-distribution examples by another two-component GMM, taking into account the data distribution. We further propose a cross-supervision with an entropy regularization loss that bootstraps the true targets from model predictions to handle the noisy labels. As a result, TCL can learn discriminative representations aligned with estimated labels through mixup and contrastive learning. Extensive experimental results on several standard benchmarks and real-world datasets demonstrate the superior performance of TCL. In particular, TCL achieves 7.5% improvements on CIFAR-10 with 90% noisy label -- an extremely noisy scenario. The source code is available at https://github.com/Hzzone/TCL.

Citations (37)

Summary

  • The paper introduces Twin Contrastive Learning (TCL), a novel framework combining contrastive learning and Gaussian Mixture Models to build classification models robust to noisy labels.
  • TCL utilizes a Gaussian Mixture Model to link noisy labels with latent variables and frames label noise detection as an out-of-distribution problem for better robustness.
  • Experiments show TCL significantly outperforms existing methods, especially at high noise levels, achieving a 7.5% gain on CIFAR-10 with 90% noise.

Twin Contrastive Learning with Noisy Labels

The paper "Twin Contrastive Learning with Noisy Labels" introduces a novel machine learning framework designed to enhance the robustness of classification models in the presence of noisy labels. This research addresses a significant challenge in deploying deep learning models on datasets whose labels may be incorrect due to human error, automated annotation, or other factors.

Core Contributions

The authors present TCL (Twin Contrastive Learning), an approach that combines contrastive learning with Gaussian mixture models (GMMs) to learn robust representations and handle noisy labels. The key components of this framework include:

  1. Gaussian Mixture Model Integration: TCL constructs a GMM over the learned representations and injects the model's class predictions into the GMM update, linking the GMM's label-free latent variables with the noisy annotations. This connection allows the model to estimate clean targets despite label noise.
  2. Out-of-Distribution Detection: TCL frames label-noise detection as an out-of-distribution problem, fitting a supplementary two-component GMM to identify mislabeled samples. Because this detector accounts for the full data distribution rather than per-sample label noise alone, it remains effective even under high noise ratios (a sketch of this step follows the list).
  3. Cross-Supervision with Entropy Regularization: Using bootstrap cross-supervision, TCL estimates true targets from model predictions on two augmented views of each sample, with each view supervising the other, and adds an entropy regularization term. This dual-augmentation strategy counteracts noisy labels by encouraging consistent, confident predictions (also sketched below).
  4. Robust Representation via Mixup and Contrastive Learning: The framework uses mixup together with contrastive learning to embed class structure into the representation space, aligning the learned representations with the estimated labels and strengthening representation learning under label noise.
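
To make the noise-detection step (items 1 and 2) concrete, the sketch below fits a two-component GMM over per-sample scores and flags the high-score component as mislabeled. The score itself, which TCL derives from its class-conditional GMM over representations, is assumed to be given, and the function and threshold names are illustrative rather than the paper's implementation.

```python
# A minimal, hypothetical sketch of the noise-detection step (items 1-2):
# fit a two-component 1-D GMM over per-sample scores and treat the component
# with the higher mean score as the "out-of-distribution" (mislabeled) set.
import numpy as np
from sklearn.mixture import GaussianMixture

def detect_noisy_labels(scores: np.ndarray, threshold: float = 0.5) -> np.ndarray:
    """Return a boolean mask, True for samples flagged as likely mislabeled.

    scores: shape (N,); larger values mean less consistency with the given label.
    """
    scores = scores.reshape(-1, 1)
    gmm = GaussianMixture(n_components=2, covariance_type="full", random_state=0)
    gmm.fit(scores)
    noisy_component = int(np.argmax(gmm.means_.ravel()))   # high-score component
    p_noisy = gmm.predict_proba(scores)[:, noisy_component]
    return p_noisy > threshold

# Toy usage: clean samples get low scores, mislabeled ones get high scores.
rng = np.random.default_rng(0)
scores = np.concatenate([rng.normal(0.5, 0.2, 900), rng.normal(2.5, 0.4, 100)])
mask = detect_noisy_labels(scores)
print(f"flagged {mask.sum()} of {len(scores)} samples as likely mislabeled")
```

The cross-supervision and mixup components (items 3 and 4) can be sketched in the same spirit. The loss below swaps detached predictions between two augmented views and adds a batch-level entropy term; the exact regularizer and weighting in TCL may differ, so treat this as an assumption-laden illustration rather than the authors' code (see https://github.com/Hzzone/TCL for the reference implementation).

```python
# A hedged sketch of bootstrap cross-supervision with entropy regularization
# plus classic mixup (items 3-4). Function names, the anti-collapse form of the
# entropy term, and the weighting `alpha` are illustrative assumptions.
import torch
import torch.nn.functional as F

def cross_supervision_loss(logits_v1: torch.Tensor,
                           logits_v2: torch.Tensor,
                           alpha: float = 1.0) -> torch.Tensor:
    """Each augmented view is supervised by the other's (detached) prediction."""
    p1, p2 = logits_v1.softmax(dim=1), logits_v2.softmax(dim=1)
    ce_12 = -(p2.detach() * F.log_softmax(logits_v1, dim=1)).sum(dim=1).mean()
    ce_21 = -(p1.detach() * F.log_softmax(logits_v2, dim=1)).sum(dim=1).mean()
    # Maximize the entropy of the batch-averaged prediction so the bootstrapped
    # targets do not collapse onto a single class.
    mean_p = (0.5 * (p1 + p2)).mean(dim=0)
    neg_entropy = (mean_p * torch.log(mean_p.clamp_min(1e-8))).sum()
    return ce_12 + ce_21 + alpha * neg_entropy

def mixup(x: torch.Tensor, targets: torch.Tensor, beta: float = 1.0):
    """Convexly combine inputs and (estimated) soft targets within a batch."""
    lam = torch.distributions.Beta(beta, beta).sample().item()
    perm = torch.randperm(x.size(0))
    return lam * x + (1 - lam) * x[perm], lam * targets + (1 - lam) * targets[perm]
```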

Experimental Validation

The efficacy of TCL is demonstrated through experiments on standard benchmarks such as CIFAR-10 and CIFAR-100, as well as the real-world datasets WebVision and Clothing1M. Results indicate that TCL outperforms existing methods, especially under extreme noise conditions: on CIFAR-10 with 90% label noise, TCL improves accuracy by 7.5% over prior methods, underscoring its ability to handle highly corrupted training data.
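
Because these benchmark results hinge on synthetically corrupted labels, a brief sketch of the standard symmetric-noise protocol may help readers reproduce such a setting. Whether the paper's experiments use exactly this routine is an assumption; consult the official repository for the precise setup.

```python
# A hedged sketch of symmetric label-noise injection, the common protocol behind
# benchmarks such as "CIFAR-10 with 90% noisy labels". This is an illustrative
# assumption, not the paper's experimental code.
import numpy as np

def inject_symmetric_noise(labels: np.ndarray, noise_rate: float,
                           num_classes: int, seed: int = 0) -> np.ndarray:
    """Reassign a `noise_rate` fraction of labels to uniformly random classes."""
    rng = np.random.default_rng(seed)
    noisy = labels.copy()
    n_noisy = int(round(noise_rate * len(labels)))
    idx = rng.choice(len(labels), size=n_noisy, replace=False)
    noisy[idx] = rng.integers(0, num_classes, size=n_noisy)
    return noisy

# Example: corrupt 90% of 50,000 CIFAR-10-style labels. A random replacement can
# coincide with the original class, so the measured disagreement falls slightly
# below the nominal noise rate.
clean = np.random.default_rng(1).integers(0, 10, size=50_000)
noisy = inject_symmetric_noise(clean, noise_rate=0.9, num_classes=10)
print(f"labels actually changed: {(noisy != clean).mean():.2%}")
```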

Implications and Future Directions

The paper suggests substantial implications for both theoretical and practical aspects of AI and machine learning:

  • Theoretical: TCL extends the understanding of contrastive learning by integrating it with probabilistic models for label-noise detection, pointing to further research on noise robustness and unsupervised representation learning.
  • Practical: TCL could be integrated into annotation-heavy pipelines, such as automated image classification, medical imaging, and other domains with inherent label uncertainty.

Looking forward, further work could refine the data distribution modeling in TCL to better accommodate different types of label noise, and explore adaptive mechanisms that dynamically tune the contrastive learning process across diverse datasets.

In summary, TCL represents a significant advancement in noise-tolerant machine learning, providing a robust framework that leverages statistical modeling alongside contrastive learning principles to effectively address the challenge of noisy labels. Through detailed experimentation, the method exhibits marked resilience under challenging conditions, setting a benchmark for future innovations in learning from imperfect data.
