
Not All Learnable Distribution Classes are Privately Learnable (2402.00267v3)

Published 1 Feb 2024 in cs.DS, cs.CR, and stat.ML

Abstract: We give an example of a class of distributions that is learnable up to constant error in total variation distance with a finite number of samples, but not learnable under $(\varepsilon, \delta)$-differential privacy with the same target error. This weakly refutes a conjecture of Ashtiani.
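
For context, the two notions contrasted in the abstract can be stated with the standard definitions from this literature (a brief sketch; the notation below follows common conventions rather than the paper itself). The total variation distance between distributions $P$ and $Q$ over a common domain is

$$d_{\mathrm{TV}}(P, Q) = \sup_{S} |P(S) - Q(S)| = \tfrac{1}{2}\,\lVert P - Q \rVert_1,$$

and a class of distributions is learnable to error $\alpha$ if some finite number of samples suffices for an algorithm to output, with high probability, a hypothesis within total variation distance $\alpha$ of the unknown distribution. A randomized algorithm $M$ is $(\varepsilon, \delta)$-differentially private if for every pair of datasets $X, X'$ differing in a single record and every measurable event $E$,

$$\Pr[M(X) \in E] \le e^{\varepsilon} \Pr[M(X') \in E] + \delta.$$

The paper's result is that these two requirements can come apart: there is a class satisfying the first at some constant error for which no $(\varepsilon, \delta)$-differentially private algorithm achieves the same error from finitely many samples.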

References (41)
  1. On the sample complexity of privately learning unbounded high-dimensional Gaussians. In Proceedings of the 32nd International Conference on Algorithmic Learning Theory, ALT ’21, pages 185–216. JMLR, Inc., 2021.
  2. Privately learning mixtures of axis-aligned Gaussians. In Advances in Neural Information Processing Systems 34, NeurIPS ’21. Curran Associates, Inc., 2021.
  3. Mixtures of Gaussians are privately learnable with a polynomial number of samples. arXiv preprint arXiv:2309.03847, 2023.
  4. Polynomial time and private learning of unbounded Gaussian mixture models. In Proceedings of the 40th International Conference on Machine Learning, ICML ’23, pages 1018–1040. JMLR, Inc., 2023.
  5. Near-optimal sample complexity bounds for robust learning of Gaussian mixtures via compression schemes. Journal of the ACM, 67(6):32:1–32:42, 2020.
  6. Privately estimating a Gaussian: Efficient, robust and optimal. In Proceedings of the 55th Annual ACM Symposium on the Theory of Computing, STOC ’23, New York, NY, USA, 2023. ACM.
  7. Private and polynomial time algorithms for learning Gaussians and beyond. In Proceedings of the 35th Annual Conference on Learning Theory, COLT ’22, pages 1075–1076, 2022.
  8. Private PAC learning implies finite Littlestone dimension. In Proceedings of the 51st Annual ACM Symposium on the Theory of Computing, STOC ’19, pages 852–860, New York, NY, USA, 2019. ACM.
  9. Hassan Ashtiani. Private learning of gaussians and their mixtures. https://www.youtube.com/watch?v=bmNjm0lx50I, July 2022.
  10. Differentially private Assouad, Fano, and Le Cam. In Proceedings of the 32nd International Conference on Algorithmic Learning Theory, ALT ’21, pages 48–78. JMLR, Inc., 2021.
  11. From robustness to privacy and back. arXiv preprint arXiv:2302.01855, 2023.
  12. Bounds on the sample complexity for private learning and private data release. Machine Learning, 94(3):401–437, 2014.
  13. Distribution learnability and robustness. In Advances in Neural Information Processing Systems 36, NeurIPS ’23. Curran Associates, Inc., 2023.
  14. CoinPress: Practical private mean and covariance estimation. In Advances in Neural Information Processing Systems 33, NeurIPS ’20, pages 14475–14485. Curran Associates, Inc., 2020.
  15. Private estimation with public data. In Advances in Neural Information Processing Systems 35, NeurIPS ’22. Curran Associates, Inc., 2022.
  16. Private hypothesis selection. In Advances in Neural Information Processing Systems 32, NeurIPS ’19, pages 156–167. Curran Associates, Inc., 2019.
  17. An equivalence between private classification and online prediction. In Proceedings of the 61st Annual IEEE Symposium on Foundations of Computer Science, FOCS ’20, pages 389–402, Washington, DC, USA, 2020. IEEE Computer Society.
  18. Differentially private release and learning of threshold functions. In Proceedings of the 56th Annual IEEE Symposium on Foundations of Computer Science, FOCS ’15, pages 634–649, Washington, DC, USA, 2015. IEEE Computer Society.
  19. Average-case averages: Private algorithms for smooth sensitivity and mean estimation. In Advances in Neural Information Processing Systems 32, NeurIPS ’19, pages 181–191. Curran Associates, Inc., 2019.
  20. The cost of privacy: Optimal rates of convergence for parameter estimation with differential privacy. The Annals of Statistics, 49(5):2825–2850, 2021.
  21. Learning Poisson binomial distributions. In Proceedings of the 44th Annual ACM Symposium on the Theory of Computing, STOC ’12, pages 709–728, New York, NY, USA, 2012. ACM.
  22. Differentially private learning of structured discrete distributions. In Advances in Neural Information Processing Systems 28, NIPS ’15, pages 2566–2574. Curran Associates, Inc., 2015.
  23. Combinatorial methods in density estimation. Springer, 2001.
  24. Calibrating noise to sensitivity in private data analysis. In Proceedings of the 3rd Conference on Theory of Cryptography, TCC ’06, pages 265–284, Berlin, Heidelberg, 2006. Springer.
  25. Sample complexity bounds on differentially private learning via communication complexity. SIAM Journal on Computing, 44(6):1740–1764, 2015.
  26. Robustness implies privacy in statistical estimation. In Proceedings of the 55th Annual ACM Symposium on the Theory of Computing, STOC ’23, New York, NY, USA, 2023. ACM.
  27. Privately learning high-dimensional distributions. In Proceedings of the 32nd Annual Conference on Learning Theory, COLT ’19, pages 1853–1902, 2019.
  28. On the learnability of discrete distributions. In Proceedings of the 26th Annual ACM Symposium on the Theory of Computing, STOC ’94, pages 273–282, New York, NY, USA, 1994. ACM.
  29. A bias-variance-privacy trilemma for statistical estimation. arXiv preprint arXiv:2301.13334, 2023.
  30. New lower bounds for private estimation and a generalized fingerprinting lemma. In Advances in Neural Information Processing Systems 35, NeurIPS ’22. Curran Associates, Inc., 2022.
  31. A private and computationally-efficient estimator for unbounded Gaussians. In Proceedings of the 35th Annual Conference on Learning Theory, COLT ’22, pages 544–572, 2022.
  32. Private robust estimation by stabilizing convex relaxations. In Proceedings of the 35th Annual Conference on Learning Theory, COLT ’22, pages 723–777, 2022.
  33. Differentially private algorithms for learning mixtures of separated Gaussians. In Advances in Neural Information Processing Systems 32, NeurIPS ’19, pages 168–180. Curran Associates, Inc., 2019.
  34. A primer on private statistics. arXiv preprint arXiv:2005.00010, 2020.
  35. Finite sample differentially private confidence intervals. In Proceedings of the 9th Conference on Innovations in Theoretical Computer Science, ITCS ’18, pages 44:1–44:9, Dagstuhl, Germany, 2018. Schloss Dagstuhl–Leibniz-Zentrum für Informatik.
  36. Impossibility of characterizing distribution learning – a simple solution to a long-standing problem. arXiv preprint arXiv:2304.08712, 2023.
  37. Robust and differentially private mean estimation. In Advances in Neural Information Processing Systems 34, NeurIPS ’21. Curran Associates, Inc., 2021.
  38. Differential privacy and robust statistics in high dimensions. In Proceedings of the 35th Annual Conference on Learning Theory, COLT ’22, pages 1167–1246, 2022.
  39. Smooth sensitivity and sampling in private data analysis. In Proceedings of the 39th Annual ACM Symposium on the Theory of Computing, STOC ’07, pages 75–84, New York, NY, USA, 2007. ACM.
  40. Vikrant Singhal. A polynomial time, pure differentially private estimator for binary product distributions. arXiv preprint arXiv:2304.06787, 2023.
  41. FriendlyCore: Practical differentially private aggregation. In Proceedings of the 39th International Conference on Machine Learning, ICML ’22, pages 21828–21863. JMLR, Inc., 2022.