
Classify and Generate Reciprocally: Simultaneous Positive-Unlabelled Learning and Conditional Generation with Extra Data

Published 14 Jun 2020 in cs.LG and stat.ML (arXiv:2006.07841v2)

Abstract: The scarcity of class-labeled data is a ubiquitous bottleneck in many machine learning problems. While abundant unlabeled data typically exist and offer a potential solution, exploiting them is highly challenging. In this paper, we address this problem by leveraging Positive-Unlabeled (PU) classification and conditional generation with extra unlabeled data *simultaneously*. In particular, we present a novel training framework that jointly targets PU classification and conditional generation when exposed to extra data, especially out-of-distribution unlabeled data, by exploring the interplay between the two tasks: 1) enhancing the performance of PU classifiers with the assistance of a novel Classifier-Noise-Invariant Conditional GAN (CNI-CGAN) that is robust to noisy labels, and 2) leveraging extra data with labels predicted by the PU classifier to improve generation. Theoretically, we prove the optimality condition of CNI-CGAN; experimentally, we conduct extensive evaluations on diverse datasets, verifying simultaneous improvements in both classification and generation.
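The PU classification setting described in the abstract trains a binary classifier from positive and unlabeled samples only. As a concrete illustration of this setting (not the paper's own implementation), the widely used non-negative PU risk estimator of Kiryo et al. (2017) can be sketched as below; the function names and the sigmoid surrogate loss are illustrative choices, and the class prior `pi` is assumed known or separately estimated:

```python
import numpy as np

def sigmoid_loss(z):
    # Surrogate loss l(z) = 1 / (1 + exp(z)); satisfies l(z) + l(-z) = 1.
    return 1.0 / (1.0 + np.exp(z))

def nnpu_risk(scores_p, scores_u, pi):
    """Non-negative PU risk estimate.

    scores_p : classifier outputs g(x) on labeled positive samples
    scores_u : classifier outputs g(x) on unlabeled samples
    pi       : class prior P(y = +1), assumed known or estimated
    """
    risk_p_pos = sigmoid_loss(scores_p).mean()   # positives classified as +1
    risk_p_neg = sigmoid_loss(-scores_p).mean()  # positives classified as -1
    risk_u_neg = sigmoid_loss(-scores_u).mean()  # unlabeled classified as -1
    # Estimated negative-class risk, clipped at zero so the empirical
    # risk cannot go negative (the source of overfitting in plain uPU):
    risk_neg = max(0.0, risk_u_neg - pi * risk_p_neg)
    return pi * risk_p_pos + risk_neg

# Toy usage: a classifier that scores positives high and unlabeled low
# yields a small, non-negative risk.
risk = nnpu_risk(np.array([2.0, 2.0]), np.array([-2.0, -2.0]), pi=0.5)
```

In the paper's framework, a classifier trained with such a PU objective supplies (noisy) pseudo-labels for the extra unlabeled data, which CNI-CGAN is designed to tolerate.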

