Directional Privacy for Deep Learning (2211.04686v3)

Published 9 Nov 2022 in cs.LG and cs.CR

Abstract: Differentially Private Stochastic Gradient Descent (DP-SGD) is a key method for applying privacy in the training of deep learning models. It applies isotropic Gaussian noise to gradients during training, which can perturb these gradients in any direction, damaging utility. Metric DP, however, can provide alternative mechanisms based on arbitrary metrics that might be more suitable for preserving utility. In this paper, we apply directional privacy, via a mechanism based on the von Mises-Fisher (VMF) distribution, to perturb gradients in terms of angular distance so that gradient direction is broadly preserved. We show that this provides both $\epsilon$-DP and $\epsilon d$-privacy for deep learning training, rather than the $(\epsilon, \delta)$-privacy of the Gaussian mechanism. Experiments on key datasets then indicate that the VMF mechanism can outperform the Gaussian in the utility-privacy trade-off. In particular, our experiments provide a direct empirical comparison of privacy between the two approaches in terms of their ability to defend against reconstruction and membership inference.
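
As a rough illustration of the idea (not the paper's reference implementation), the sketch below clips a gradient as in DP-SGD and then resamples its direction from a von Mises-Fisher distribution centred on the original direction. The choice to let the concentration parameter kappa stand in for the privacy parameter, the decision to preserve the clipped norm, and the use of scipy.stats.vonmises_fisher (available in SciPy >= 1.11) are assumptions made for illustration only.

```python
# Minimal sketch of direction-preserving gradient perturbation with VMF noise.
# Assumptions (not taken from the paper): kappa acts as the privacy parameter,
# the perturbed gradient keeps the clipped norm, and SciPy >= 1.11 provides
# scipy.stats.vonmises_fisher as the sampler.
import numpy as np
from scipy.stats import vonmises_fisher


def vmf_perturb_gradient(grad, kappa, clip_norm=1.0, rng=None):
    """Clip `grad` to `clip_norm`, then resample its direction from a
    von Mises-Fisher distribution centred on the original direction."""
    rng = np.random.default_rng(rng)
    grad = np.asarray(grad, dtype=float)

    norm = np.linalg.norm(grad)
    if norm == 0.0:
        return grad  # nothing to perturb

    # Per-example gradient clipping, as in standard DP-SGD.
    clipped_norm = min(norm, clip_norm)

    # Unit mean direction for the VMF distribution.
    mu = grad / norm

    # Draw one perturbed unit direction concentrated around mu.
    sample = vonmises_fisher(mu, kappa).rvs(1, random_state=rng)
    noisy_dir = np.asarray(sample).reshape(-1)

    # Keep the (clipped) magnitude, perturb only the direction.
    return clipped_norm * noisy_dir


# Example: perturb a flattened gradient vector with a fairly high concentration.
g = np.array([0.3, -1.2, 0.7, 0.1])
print(vmf_perturb_gradient(g, kappa=50.0))
```

Contrast this with the Gaussian mechanism, which adds isotropic noise to every coordinate and can therefore flip the gradient's direction; here the noise is concentrated around the original direction, with larger kappa (less privacy under the assumed correspondence) keeping the update closer to the true descent direction.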
