
Improving the Privacy and Practicality of Objective Perturbation for Differentially Private Linear Learners (2401.00583v1)

Published 31 Dec 2023 in cs.LG and cs.CR

Abstract: In the arena of privacy-preserving machine learning, differentially private stochastic gradient descent (DP-SGD) has outstripped the objective perturbation mechanism in popularity and interest. Though unrivaled in versatility, DP-SGD requires a non-trivial privacy overhead (for privately tuning the model's hyperparameters) and a computational complexity which might be extravagant for simple models such as linear and logistic regression. This paper revamps the objective perturbation mechanism with tighter privacy analyses and new computational tools that boost it to perform competitively with DP-SGD on unconstrained convex generalized linear problems.
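To make the contrast with DP-SGD concrete, below is a minimal sketch of the classic objective perturbation recipe applied to unconstrained, regularized logistic regression (a convex generalized linear problem). The noise calibration, budget split, and added ridge term are illustrative placeholders in the spirit of earlier objective perturbation analyses, not the tightened privacy analysis or computational tools introduced in this paper; function and parameter names are hypothetical.

```python
# Illustrative sketch of objective perturbation for logistic regression.
# All constants (noise scale, budget split, ridge term) are placeholder
# assumptions, NOT the tightened calibration from this paper.
import numpy as np
from scipy.optimize import minimize


def objective_perturbation_logreg(X, y, epsilon, delta,
                                  smoothness=0.25, lipschitz=1.0):
    """Return a privately trained weight vector (illustrative only).

    Assumes each row of X has L2 norm <= 1 and labels are in {-1, +1},
    so the per-example loss gradient is bounded by `lipschitz` and the
    per-example Hessian by `smoothness` (0.25 for the logistic loss).
    """
    n, d = X.shape

    # Split the budget between the noise term and the extra regularization
    # (an arbitrary 50/50 split, shown only for illustration).
    eps_noise, eps_reg = 0.5 * epsilon, 0.5 * epsilon

    # Gaussian noise vector added once to the objective, scaled to the
    # per-example gradient bound (a classical-style calibration).
    sigma = lipschitz * np.sqrt(2.0 * np.log(1.25 / delta)) / eps_noise
    b = np.random.normal(0.0, sigma, size=d)

    # Extra ridge term so the perturbed objective remains strongly convex
    # enough for the privacy argument.
    lam = smoothness / eps_reg

    def private_objective(theta):
        margins = y * (X @ theta)
        loss = np.mean(np.log1p(np.exp(-margins)))        # logistic loss
        reg = 0.5 * (lam / n) * np.dot(theta, theta)      # added ridge term
        noise = np.dot(b, theta) / n                      # linear noise term
        return loss + reg + noise

    # The perturbed problem is still smooth and convex, so any off-the-shelf
    # solver works and no per-iteration noise is injected.
    result = minimize(private_objective, np.zeros(d), method="L-BFGS-B")
    return result.x
```

Because the noise enters the objective a single time rather than at every gradient step (as in DP-SGD), the optimizer itself can be any non-private convex solver, which is what makes the mechanism attractive for simple linear and logistic models once its privacy analysis is tight enough.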
