
Deep Self-Learning From Noisy Labels (1908.02160v2)

Published 6 Aug 2019 in cs.CV and cs.LG

Abstract: ConvNets achieve good results when trained on clean data, but learning from noisy labels significantly degrades performance and remains challenging. Unlike previous works constrained by many conditions that make them infeasible for real noisy cases, this work presents a novel deep self-learning framework to train a robust network on real noisy datasets without extra supervision. The proposed approach has several appealing benefits. (1) Different from most existing work, it does not rely on any assumption about the distribution of the noisy labels, making it robust to real noise. (2) It does not need extra clean supervision or an auxiliary network to help training. (3) A self-learning framework is proposed to train the network in an iterative end-to-end manner, which is effective and efficient. Extensive experiments on challenging benchmarks such as Clothing1M and Food101-N show that our approach outperforms its counterparts in all empirical settings.

Citations (265)

Summary

  • The paper introduces a self-learning framework that iteratively corrects labels using multiple prototypes to enhance deep network performance.
  • It employs cosine similarity and prototype selection to overcome the limitations of single-prototype methods in noisy data.
  • Experimental results on Clothing1M and Food101-N show notable accuracy improvements, surpassing previous state-of-the-art techniques.

Deep Self-Learning From Noisy Labels

The paper "Deep Self-Learning From Noisy Labels" introduces a novel framework for improving the robustness and performance of deep convolutional networks trained on datasets with noisy labels. This work differs from earlier techniques by employing a self-learning framework that dismisses traditional assumptions about noise distribution, reducing the reliance on additional supervision or supplementary models. The authors focus on the practicality and efficiency of their method, circumventing unrealistic constraints often associated with existing approaches.

Framework Overview

The proposed method, Self-Learning with Multi-Prototypes (SMP), operates through iterative training phases that alternate between self-training the network and correcting the labels of the training data. The framework's key innovation is its use of multiple prototypes to represent each class's feature distribution, addressing the limitation of single-prototype methods. The self-learning approach updates labels and network parameters iteratively, optimizing performance on real-world noisy datasets without external clean-label supervision.
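
To make the alternating scheme concrete, the sketch below shows a single training step in which the loss blends the original noisy label with the label produced by the latest prototype-based correction. This is a minimal PyTorch illustration; the blending weight `alpha` and the exact form of the combined loss are assumptions made for exposition, not the authors' released implementation.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

def self_training_step(model: nn.Module,
                       optimizer: torch.optim.Optimizer,
                       images: torch.Tensor,
                       noisy_labels: torch.Tensor,
                       corrected_labels: torch.Tensor,
                       alpha: float = 0.5) -> float:
    """One optimization step on a mini-batch: the loss mixes the original
    noisy labels with the labels from the most recent correction phase."""
    model.train()
    logits = model(images)
    # Weighted sum of the two cross-entropy terms (alpha is a hypothetical knob).
    loss = alpha * F.cross_entropy(logits, noisy_labels) \
         + (1.0 - alpha) * F.cross_entropy(logits, corrected_labels)
    optimizer.zero_grad()
    loss.backward()
    optimizer.step()
    return loss.item()
```

After each training phase, the correction phase recomputes deep features, refreshes the class prototypes, and relabels the data before the next round of training.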

Experimentation and Results

The effectiveness of the method is demonstrated through extensive experimentation on the benchmark datasets Clothing1M and Food101-N. The SMP approach yields a notable improvement in classification accuracy, outperforming previous state-of-the-art methods such as CleanNet on both datasets.

  • Clothing1M Results: When training solely with the 1M noisy dataset, the method achieves an accuracy of 74.45%, surpassing both the Joint Optimization and MLNT-Teacher approaches. Including additional verification information further raises the accuracy to 76.44%, confirming the technique's effectiveness without heavy computational demands.
  • Food101-N Results: An accuracy of 85.11% demonstrates the method's superior handling of label noise compared to solutions such as CleanNet.

Methodological Insights

Key methodologies, such as the use of cosine similarity for feature comparison and density-based prototype selection, are highlighted. The authors show that images with high local density in feature space are more likely to carry correct labels; these images serve as prototypes for label correction, a process that yields significant accuracy gains over baseline methods.
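
The sketch below illustrates how such prototype selection and label correction could look in practice: prototypes for a class are drawn from high-density samples that are not too similar to one another, and each image is then relabeled with the class whose prototypes it most resembles. The similarity threshold, the number of prototypes per class, and the averaging of similarities are simplified assumptions rather than the paper's exact settings.

```python
import torch
import torch.nn.functional as F

def select_prototypes(feats: torch.Tensor, num_prototypes: int = 3,
                      sim_threshold: float = 0.8) -> torch.Tensor:
    """Pick prototypes for one class: rank samples by local density (number of
    same-class neighbors above a cosine-similarity threshold) and keep the
    densest ones, skipping candidates too similar to already chosen prototypes."""
    feats = F.normalize(feats, dim=1)            # unit-norm deep features
    sim = feats @ feats.t()                      # pairwise cosine similarity
    density = (sim > sim_threshold).sum(dim=1)   # local density per sample
    order = torch.argsort(density, descending=True)

    chosen = []
    for i in order.tolist():
        if all(sim[i, j] < sim_threshold for j in chosen):
            chosen.append(i)
        if len(chosen) == num_prototypes:
            break
    return feats[chosen]                         # (num_prototypes, feature_dim)

def correct_labels(feats: torch.Tensor, prototypes_per_class: list) -> torch.Tensor:
    """Relabel each sample with the class whose prototypes are, on average,
    most similar to its feature vector."""
    feats = F.normalize(feats, dim=1)
    scores = torch.stack(
        [(feats @ p.t()).mean(dim=1) for p in prototypes_per_class], dim=1)
    return scores.argmax(dim=1)                  # corrected label per sample
```

In this reading, the density criterion favors images that sit in well-populated regions of the feature space, while the diversity check keeps prototypes spread out so that a single cluster cannot dominate the class representation.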

Implications and Future Directions

The contributions of this work have notable implications for training models on large datasets collected from real-world sources, where label noise can be prevalent. By seamlessly integrating label correction into the training process, this framework provides a robust and versatile solution without necessitating extra computational resources or auxiliary models. This capability significantly lowers the barrier for employing deep networks in practical applications where clean labels are seldom available.

Future research could explore refining the prototype selection process and integrating the framework with other network architectures or tasks beyond classification. There is also potential for exploring the theoretical limits of the multi-prototype approach under different noise conditions, thereby expanding its applicability and effectiveness across varied domains.

In summary, the paper presents a compelling methodology for tackling noisy labels in deep learning, achieving impressive results while minimizing assumptions and dependencies. This core idea offers a reliable path for applying deep learning models in real-world contexts where label inaccuracies are common.