The Relative Gaussian Mechanism and its Application to Private Gradient Descent (2308.15250v2)
Abstract: The Gaussian Mechanism (GM), which adds Gaussian noise to a vector-valued query before releasing it, is a standard privacy protection mechanism. In particular, when the query satisfies an L2 sensitivity property (the L2 distance between the outputs on any two neighboring inputs is bounded), GM guarantees Rényi Differential Privacy (RDP). Unfortunately, precisely bounding the L2 sensitivity can be hard, leading to loose privacy bounds. In this work, we consider a relative L2 sensitivity assumption, in which the bound on the distance between two query outputs may also depend on their norms. Leveraging this assumption, we introduce the Relative Gaussian Mechanism (RGM), in which the variance of the noise depends on the norm of the output. We prove tight bounds on the RDP parameters under relative L2 sensitivity, and characterize the privacy loss incurred by using output-dependent noise. In particular, we show that RGM naturally adapts to a latent variable that controls the norm of the output. Finally, we instantiate our framework to obtain tight guarantees for Private Gradient Descent, a problem that naturally fits our relative L2 sensitivity assumption.
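The core idea of output-dependent noise can be sketched in a few lines. The snippet below is a minimal illustration, not the paper's exact algorithm: it assumes the noise standard deviation is calibrated as `sigma0` times the norm of the query output, with `sigma0` a hypothetical scale parameter that would in practice be derived from the relative sensitivity constant and the target RDP level.

```python
import numpy as np

def relative_gaussian_mechanism(query_output, sigma0, rng=None):
    """Illustrative sketch of an RGM-style release: add Gaussian noise
    whose standard deviation scales with the norm of the query output.

    Under a relative L2 sensitivity assumption, the distance between
    outputs on neighboring datasets is bounded in terms of their norms,
    so scaling the noise by the output norm keeps the effective
    sensitivity-to-noise ratio controlled. `sigma0` is a placeholder
    for the calibration implied by the paper's RDP analysis.
    """
    rng = np.random.default_rng() if rng is None else rng
    query_output = np.asarray(query_output, dtype=float)
    sigma = sigma0 * np.linalg.norm(query_output)  # output-dependent scale
    noise = rng.normal(loc=0.0, scale=sigma, size=query_output.shape)
    return query_output + noise
```

Contrast with the standard Gaussian Mechanism, where `sigma` is a fixed constant calibrated to a worst-case L2 sensitivity bound; here the noise shrinks automatically when the output itself is small.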