ProPML: Probability Partial Multi-label Learning (2403.07603v1)

Published 12 Mar 2024 in cs.LG

Abstract: Partial Multi-label Learning (PML) is a type of weakly supervised learning where each training instance corresponds to a set of candidate labels, among which only some are true. In this paper, we introduce ProPML, a novel probabilistic approach to this problem that extends the binary cross entropy to the PML setup. In contrast to existing methods, it does not require suboptimal disambiguation and, as such, can be applied to any deep architecture. Furthermore, experiments conducted on artificial and real-world datasets indicate that ProPML outperforms existing approaches, especially under high noise in the candidate set.
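
To make the abstract's idea concrete, below is a minimal, hypothetical sketch of a probabilistic BCE-style loss for the PML setting, written in PyTorch. The specific candidate-set term (maximizing the probability that at least one candidate label is relevant while treating non-candidate labels as known negatives) and the function name pml_probabilistic_bce are illustrative assumptions, not the exact ProPML objective from the paper.

```python
import torch


def pml_probabilistic_bce(logits: torch.Tensor, candidates: torch.Tensor,
                          eps: float = 1e-7) -> torch.Tensor:
    # Illustrative sketch only, not the published ProPML loss.
    # logits: (batch, num_labels) raw scores from any deep architecture.
    # candidates: (batch, num_labels) binary mask; 1 = label is in the
    # (possibly noisy) candidate set, 0 = known negative.
    probs = torch.sigmoid(logits).clamp(eps, 1 - eps)

    # Known negatives: standard binary cross-entropy term pushing p -> 0.
    neg_term = -((1 - candidates) * torch.log(1 - probs)).sum(dim=1)

    # Candidate set: maximize P(at least one candidate label is true)
    # = 1 - prod_{j in candidates} (1 - p_j), computed in log space.
    log_none_true = (candidates * torch.log(1 - probs)).sum(dim=1)
    pos_term = -torch.log((1 - torch.exp(log_none_true)).clamp(min=eps))

    return (neg_term + pos_term).mean()


if __name__ == "__main__":
    logits = torch.randn(4, 10, requires_grad=True)
    candidates = (torch.rand(4, 10) < 0.3).float()
    loss = pml_probabilistic_bce(logits, candidates)
    loss.backward()
    print(loss.item())
```

Because such a loss operates directly on per-label sigmoid outputs, it can be attached to any deep architecture without a separate label-disambiguation stage, which matches the property the abstract emphasizes.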

Authors (4)
  1. Łukasz Struski (37 papers)
  2. Adam Pardyl (7 papers)
  3. Jacek Tabor (106 papers)
  4. Bartosz Zieliński (42 papers)

