
LLP-Bench: A Large Scale Tabular Benchmark for Learning from Label Proportions (2310.10096v2)

Published 16 Oct 2023 in cs.LG and stat.ML

Abstract: In the task of Learning from Label Proportions (LLP), a model is trained on groups (a.k.a. bags) of instances and their corresponding label proportions to predict labels for individual instances. LLP has been applied predominantly to two types of datasets: image and tabular. In image LLP, bags of fixed size are created by randomly sampling instances from an underlying dataset. Bags created via this methodology are called random bags. Experimentation on image LLP has mostly used random bags on the CIFAR-* and MNIST datasets. Despite being a crucial task in privacy-sensitive applications, tabular LLP does not yet have an open, large-scale LLP benchmark. One of the unique properties of tabular LLP is the ability to create feature bags, where all the instances in a bag have the same value for a given feature. Prior research has shown that feature bags are very common in practical, real-world applications [Chen et al. '23, Saket et al. '22]. In this paper, we address the lack of an open, large-scale tabular benchmark. First, we propose LLP-Bench, a suite of 70 LLP datasets (62 feature-bag and 8 random-bag datasets) created from the Criteo CTR prediction and the Criteo Sponsored Search Conversion Logs datasets, the former a classification and the latter a regression dataset. These LLP datasets represent diverse ways in which bags can be constructed from underlying tabular data. To the best of our knowledge, LLP-Bench is the first large-scale tabular LLP benchmark with extensive diversity in its constituent datasets. Second, we propose four metrics that characterize and quantify the hardness of an LLP dataset. Using these four metrics, we present a deep analysis of the 62 feature-bag datasets in LLP-Bench. Finally, we present the performance of 9 SOTA and popular tabular LLP techniques on all 62 datasets.
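As a rough illustration of the two bag-construction schemes the abstract contrasts, the sketch below builds random bags (fixed-size, randomly sampled) and feature bags (all instances share one feature's value) from a toy table and releases only each bag's label proportion. This is not the LLP-Bench code; the column names, bag size, and data are hypothetical.

```python
# Minimal sketch of random bags vs. feature bags for tabular LLP.
# Column names ("feature_3", "label") and the bag size are illustrative only.
import numpy as np
import pandas as pd

rng = np.random.default_rng(0)

# Toy tabular dataset standing in for a Criteo-style log.
df = pd.DataFrame({
    "feature_3": rng.integers(0, 5, size=1000),   # a categorical feature
    "label": rng.integers(0, 2, size=1000),       # binary label (CTR-style)
})

def make_random_bags(frame: pd.DataFrame, bag_size: int = 50):
    """Random bags: partition a shuffled copy of the data into fixed-size bags."""
    shuffled = frame.sample(frac=1.0, random_state=0).reset_index(drop=True)
    bags = [shuffled.iloc[i:i + bag_size] for i in range(0, len(shuffled), bag_size)]
    # Only the bag-level label proportion is released, not instance labels.
    return [(bag.drop(columns="label"), bag["label"].mean()) for bag in bags]

def make_feature_bags(frame: pd.DataFrame, key: str = "feature_3"):
    """Feature bags: every instance in a bag shares the same value of `key`."""
    return [(grp.drop(columns="label"), grp["label"].mean())
            for _, grp in frame.groupby(key)]

random_bags = make_random_bags(df)
feature_bags = make_feature_bags(df)
print(len(random_bags), len(feature_bags))
```

Most LLP methods then fit an instance-level model so that the average of its predictions within each bag matches the released proportion (e.g., via a squared-error or KL bag loss); the benchmarked techniques differ mainly in how they exploit this bag-level supervision.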

References (43)
  1. Co-training for demographic classification using deep learning from label proportions. In ICDM, pages 1017–1024, 2017.
  2. Deep learning from label proportions for emphysema quantification. In Medical Image Computing and Computer Assisted Intervention–MICCAI 2018: 21st International Conference, Granada, Spain, September 16-20, 2018, Proceedings, Part II 11, pages 768–776. Springer, 2018a.
  3. Deep learning from label proportions for emphysema quantification. In MICCAI, volume 11071 of Lecture Notes in Computer Science, pages 768–776. Springer, 2018b. URL https://arxiv.org/abs/1807.08601.
  4. Easy learning from label proportions. CoRR, abs/2302.03115, 2023. doi: 10.48550/arXiv.2302.03115. URL https://doi.org/10.48550/arXiv.2302.03115.
  5. Cost-based labeling of groups of mass spectra. In Proceedings of the 2004 ACM SIGMOD international conference on Management of data, pages 167–178, 2004.
  6. Learning from aggregated data: Curated bags versus random bags. CoRR, abs/2305.09557, 2023a. doi: 10.48550/arXiv.2305.09557. URL https://doi.org/10.48550/arXiv.2305.09557.
  7. Learning from aggregated data: Curated bags versus random bags. arXiv preprint arXiv:2305.09557, 2023b.
  8. Kernel k-means based framework for aggregate outputs classification. In Yücel Saygin, Jeffrey Xu Yu, Hillol Kargupta, Wei Wang, Sanjay Ranka, Philip S. Yu, and Xindong Wu, editors, ICDM, pages 356–361, 2009.
  9. Criteo. Kaggle display advertising challenge dataset, 2014. URL http://labs.criteo.com/2014/02/kaggle-display-advertising-challenge-dataset/.
  10. Learning about individuals from group statistics. In UAI, pages 332–339, 2005.
  11. Weakly supervised classification in high energy physics. Journal of High Energy Physics, 2017(5):1–11, 2017.
  12. UCI machine learning repository, 2017. URL http://archive.ics.uci.edu/ml.
  13. Deep multi-class learning from label proportions. CoRR, abs/1905.12909, 2019. URL http://arxiv.org/abs/1905.12909.
  14. Deep multi-class learning from label proportions. arXiv preprint arXiv:1905.12909, 2019.
  15. A large-scale study of web password habits. In Proceedings of the 16th international conference on World Wide Web, pages 657–666, 2007.
  16. Practical lessons from predicting clicks on ads at facebook. In Proceedings of the eighth international workshop on data mining for online advertising, pages 1–9, 2014.
  17. Learning bayesian network classifiers from label proportions. Pattern Recognit., 46(12):3425–3440, 2013.
  18. Fitting the data from embryo implantation prediction: Learning from label proportions. Statistical methods in medical research, 27(4):1056–1066, 2018.
  19. From group to individual labels using deep features. In SIGKDD, pages 597–606, 2015.
  20. Learning from label proportions with generative adversarial networks. In NeurIPS, pages 7167–7177, 2019a.
  21. Learning from label proportions with generative adversarial networks. Advances in neural information processing systems, 32, 2019b.
  22. Two-stage training for learning from label proportions. In Zhi-Hua Zhou, editor, Proc. IJCAI, pages 2737–2743, 2021.
  23. Self-llp: Self-supervised learning from label proportions with self-ensemble. Pattern Recognition, 129:108767, 2022.
  24. Ad click prediction: a view from the trenches. In Proceedings of the 19th ACM SIGKDD international conference on Knowledge discovery and data mining, pages 1222–1230, 2013.
  25. Supervised learning by training on aggregate outputs. In ICDM, pages 252–261. IEEE Computer Society, 2007.
  26. Challenges and approaches to privacy preserving post-click conversion prediction. CoRR, abs/2201.12666, 2022. URL https://arxiv.org/abs/2201.12666.
  27. Quantifying emphysema extent from weakly labeled CT scans of the lungs using label proportions learning. In The Sixth International Workshop on Pulmonary Image Analysis, pages 31–42, 2016.
  28. (almost) no label no cry. In Zoubin Ghahramani, Max Welling, Corinna Cortes, Neil D. Lawrence, and Kilian Q. Weinberger, editors, Advances in Neural Information Processing Systems, pages 190–198, 2014.
  29. Estimating labels from label proportions. J. Mach. Learn. Res., 10:2349–2374, 2009.
  30. Stefan Rüping. SVM classifier estimation from group probabilities. In Johannes Fürnkranz and Thorsten Joachims, editors, ICML, pages 911–918, 2010.
  31. Rishi Saket. Learnability of linear thresholds from label proportions. In NeurIPS, pages 6555–6566, 2021.
  32. Rishi Saket. Algorithms and hardness for learning linear thresholds from label proportions. In NeurIPS, 2022.
  33. On combining bags to better learn from label proportions. In AISTATS, volume 151 of Proceedings of Machine Learning Research, pages 5913–5927. PMLR, 2022. URL https://proceedings.mlr.press/v151/saket22a.html.
  34. Learning from label proportions: A mutual contamination framework. In NeurIPS, 2020.
  35. Autoint: Automatic feature interaction learning via self-attentive neural networks. In CIKM, 2019.
  36. Learning from label proportions by optimizing cluster model selection. In Dimitrios Gunopulos, Thomas Hofmann, Donato Malerba, and Michalis Vazirgiannis, editors, ECML PKDD Proceedings, Part III, volume 6913, pages 349–364. Springer, 2011.
  37. Reacting to variations in product demand: An application for conversion rate (CR) prediction in sponsored search. In 2018 IEEE International Conference on Big Data (Big Data), pages 1856–1864. IEEE, 2018.
  38. Learning from label proportions with consistency regularization. In Asian Conference on Machine Learning, pages 513–528. PMLR, 2020.
  39. K. L. Wagstaff and T. Lane. Salience assignment for multiple-instance regression. In Workshop on Constrained Optimization and Structured Output (ICML), 2007.
  40. Aerosol Optical Depth Prediction from Satellite Observations by Multiple Instance Regression, pages 165–176. 2008.
  41. Using published medical results and non-homogenous data in rule learning. In Proc. International Conference on Machine Learning and Applications and Workshops, volume 2, pages 84–89. IEEE, 2011.
  42. ∝SVM for learning with label proportions. In ICML, volume 28 of JMLR Workshop and Conference Proceedings, pages 504–512, 2013.
  43. Learning from label proportions by learning with label noise. Advances in Neural Information Processing Systems, 35:26933–26942, 2022.