Combining Self-Supervised and Supervised Learning with Noisy Labels (2011.08145v2)

Published 16 Nov 2020 in cs.CV

Abstract: Since convolutional neural networks (CNNs) can easily overfit noisy labels, which are ubiquitous in visual classification tasks, training CNNs robustly against them has been a great challenge. Various methods have been proposed for this challenge, but none of them pays attention to the difference between representation learning and classifier learning in CNNs. Inspired by the observation that the classifier is relatively robust to noisy labels while the representation is much more fragile, and by recent advances in self-supervised representation learning (SSRL), we design a new method, CS$^3$NL, that obtains the representation by SSRL without labels and trains the classifier directly with the noisy labels. Extensive experiments are performed on both synthetic and real benchmark datasets. Results demonstrate that the proposed method beats the state-of-the-art ones by a large margin, especially under high noise levels.
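The two-stage idea described in the abstract (learn the representation without labels, then fit only the classifier on the noisy labels) can be illustrated with a minimal PyTorch sketch. This is not the authors' implementation: the ResNet-18 backbone, the feature dimension, and the optimizer settings are placeholder assumptions, and the self-supervised pretraining stage (e.g., a SimCLR/BYOL-style method) is assumed to have been run elsewhere.

```python
import torch
import torch.nn as nn
import torchvision

# Stage 1 (assumed done elsewhere): a backbone pretrained with SSRL on
# unlabeled images. A ResNet-18 stands in for that encoder here; in
# practice its weights would be loaded from the self-supervised checkpoint.
encoder = torchvision.models.resnet18(weights=None)
encoder.fc = nn.Identity()          # drop the original classification head

# Freeze the representation so the noisy labels cannot corrupt it.
for p in encoder.parameters():
    p.requires_grad = False
encoder.eval()

# Stage 2: train only a lightweight classifier on the (noisy) labels.
num_classes = 10
classifier = nn.Linear(512, num_classes)   # 512 = ResNet-18 feature dim
optimizer = torch.optim.SGD(classifier.parameters(), lr=0.1, momentum=0.9)
criterion = nn.CrossEntropyLoss()

def train_step(images, noisy_labels):
    with torch.no_grad():                  # representation stays fixed
        feats = encoder(images)
    logits = classifier(feats)
    loss = criterion(logits, noisy_labels)
    optimizer.zero_grad()
    loss.backward()
    optimizer.step()
    return loss.item()

# Example usage with random tensors standing in for a data batch.
images = torch.randn(8, 3, 32, 32)
noisy_labels = torch.randint(0, num_classes, (8,))
print(train_step(images, noisy_labels))
```

Keeping the encoder frozen during supervised training reflects the paper's premise: the classifier tolerates label noise far better than the representation, so only the classifier is exposed to the noisy supervision.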

