
CroSel: Cross Selection of Confident Pseudo Labels for Partial-Label Learning

Published 18 Mar 2023 in cs.LG and cs.AI (arXiv:2303.10365v3)

Abstract: Partial-label learning (PLL) is an important weakly supervised learning problem in which each training example is annotated with a candidate label set rather than a single ground-truth label. Identification-based methods, which treat the true label as a latent variable to be identified, have been widely explored to tackle label ambiguity in PLL. However, identifying the true labels accurately and completely remains challenging, introducing noise into the pseudo labels used during model training. In this paper, we propose a new method called CroSel, which leverages the model's historical predictions to identify the true labels of most training examples. First, we introduce a cross selection strategy in which two deep models select true labels of partially labeled data for each other. In addition, we propose a novel consistency regularization term called co-mix to avoid wasting unselected samples and to mitigate the small amount of noise caused by false selection. In this way, CroSel can identify the true labels of most examples with high precision. Extensive experiments demonstrate the superiority of CroSel, which consistently outperforms previous state-of-the-art methods on benchmark datasets. Additionally, our method achieves over 90% precision and selection ratio when selecting true labels on CIFAR-type datasets under various settings.
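The core selection step described above, using a model's historical predictions to pick out confidently identified labels within each candidate set, can be sketched as follows. This is a minimal illustration under assumed details (the averaging window, the confidence threshold, and the stability criterion are assumptions for illustration, not the paper's exact selection rule; `select_confident_labels` is a hypothetical helper name):

```python
import numpy as np

def select_confident_labels(history_probs, candidate_mask, threshold=0.9):
    """Pick examples whose historical predictions are confident and stable.

    history_probs: (T, N, C) softmax outputs of one model over the last T epochs.
    candidate_mask: (N, C) boolean, True where a class is in the candidate set.
    Returns (selected_idx, pseudo_labels) for the selected examples.
    """
    mean_probs = history_probs.mean(axis=0)                  # (N, C) average confidence
    mean_probs = np.where(candidate_mask, mean_probs, 0.0)   # restrict to candidate labels
    pseudo = mean_probs.argmax(axis=1)                       # most likely candidate label
    conf = mean_probs.max(axis=1)                            # its averaged confidence
    # Stability: the raw prediction pointed at the same class in every epoch.
    stable = (history_probs.argmax(axis=2) == pseudo).all(axis=0)
    selected = np.where((conf > threshold) & stable)[0]
    return selected, pseudo[selected]

# Cross selection: each model is supervised by the labels its peer selected.
# sel_a, labels_a = select_confident_labels(history_a, mask)  # used to train model B
# sel_b, labels_b = select_confident_labels(history_b, mask)  # used to train model A
```

Having each network train on its peer's selections, rather than its own, is what keeps a model's confirmation bias from reinforcing its own mistaken identifications.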

