
Learning from Aggregate responses: Instance Level versus Bag Level Loss Functions (2401.11081v1)

Published 20 Jan 2024 in cs.LG, cs.AI, math.ST, stat.ML, and stat.TH

Abstract: Due to rising privacy concerns, in many practical applications the training data is aggregated before being shared with the learner, in order to protect the privacy of users' sensitive responses. In an aggregate learning framework, the dataset is grouped into bags of samples, where each bag is available only with an aggregate response that summarizes the individual responses in that bag. In this paper, we study two natural loss functions for learning from aggregate responses: the bag-level loss and the instance-level loss. In the former, the model is learned by minimizing a loss between the aggregate responses and the aggregate model predictions, while in the latter the model aims to fit individual predictions to the aggregate responses. We show that the instance-level loss can be viewed as a regularized form of the bag-level loss. This observation lets us compare the two approaches with respect to the bias and variance of the resulting estimators, and to introduce a novel interpolating estimator that combines the two approaches. For linear regression tasks, we provide a precise characterization of the risk of the interpolating estimator in an asymptotic regime where the size of the training set grows in proportion to the feature dimension. Our analysis lets us theoretically understand the effect of different factors, such as the bag size, on the model prediction risk. In addition, we propose a mechanism for differentially private learning from aggregate responses and derive the optimal bag size in terms of the prediction risk-privacy trade-off. We also carry out thorough experiments to corroborate our theory and demonstrate the efficacy of the interpolating estimator.
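
The contrast between the two losses is easiest to see in code. The sketch below is a minimal illustration, not the paper's implementation: it assumes mean-aggregated bags of equal size, a squared-error loss, and an interpolation weight `rho` (all illustrative choices) to show how the bag-level loss, the instance-level loss, and an interpolation between them differ for a linear model.

```python
# Minimal sketch (illustrative, not the authors' code) of bag-level vs.
# instance-level losses for linear regression with mean-aggregated bags.
import numpy as np

rng = np.random.default_rng(0)
n, d, bag_size = 200, 5, 4
X = rng.normal(size=(n, d))
y = X @ rng.normal(size=d) + 0.1 * rng.normal(size=n)

# Group samples into contiguous bags and keep only each bag's mean response.
bags = np.arange(n).reshape(-1, bag_size)
y_bag = y[bags].mean(axis=1)                      # aggregate responses

def bag_level_loss(theta):
    # Aggregate the model predictions within each bag, then compare the
    # aggregate prediction to the bag's aggregate response.
    pred_bag = (X @ theta)[bags].mean(axis=1)
    return np.mean((pred_bag - y_bag) ** 2)

def instance_level_loss(theta):
    # Fit each individual prediction to its bag's aggregate response.
    pred = X @ theta
    return np.mean((pred - np.repeat(y_bag, bag_size)) ** 2)

def interpolated_loss(theta, rho=0.5):
    # rho = 0 recovers the bag-level loss, rho = 1 the instance-level loss.
    return (1 - rho) * bag_level_loss(theta) + rho * instance_level_loss(theta)

theta0 = np.zeros(d)
print(bag_level_loss(theta0), instance_level_loss(theta0), interpolated_loss(theta0))
```

At rho = 0 this reduces to the bag-level objective and at rho = 1 to the instance-level objective; intermediate values mimic the kind of interpolation the abstract describes, though the exact parameterization and estimator used in the paper may differ.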
