
Differentially Private and Adversarially Robust Machine Learning: An Empirical Evaluation (2401.10405v1)

Published 18 Jan 2024 in cs.LG

Abstract: Malicious adversaries can attack machine learning models to infer sensitive information or to damage the system through evasion attacks. Although a variety of work addresses privacy and security concerns, each typically focuses on a single defense, whereas in practice models may face simultaneous attacks. This study explores combining adversarial training with differentially private training to defend against simultaneous attacks. Although differentially private adversarial training, as presented in DP-Adv, outperforms other state-of-the-art methods, it lacks formal privacy guarantees and empirical validation. In this work, we benchmark the technique against a membership inference attack and show empirically that the resulting models are as private as non-robust private models. The work also highlights the need to explore privacy guarantees in dynamic training paradigms.
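The combination the abstract describes — DP-SGD-style training on adversarially perturbed examples, followed by a membership inference check — can be sketched as follows. This is a minimal illustrative sketch on a toy logistic-regression problem, not the paper's actual DP-Adv implementation: all hyper-parameters (clipping norm, noise multiplier, FGSM budget) and the loss-threshold attack variant are assumptions made for illustration.

```python
import numpy as np

rng = np.random.default_rng(0)

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def per_sample_grad(w, x, y):
    # Gradient of the logistic loss w.r.t. the weights for one example (x, y).
    return (sigmoid(w @ x) - y) * x

def fgsm(w, x, y, eps=0.1):
    # One-step FGSM: perturb x in the sign of the loss gradient w.r.t. the
    # input, which for logistic regression is (sigmoid(w@x) - y) * w.
    return x + eps * np.sign((sigmoid(w @ x) - y) * w)

def dp_adv_step(w, X, Y, lr=0.5, clip=1.0, sigma=1.0, eps=0.1):
    # One DP-SGD step on adversarially perturbed inputs: craft an FGSM
    # example per sample, clip each per-sample gradient to norm <= clip,
    # then add Gaussian noise with std sigma * clip to the summed gradient.
    grads = []
    for x, y in zip(X, Y):
        x_adv = fgsm(w, x, y, eps)
        g = per_sample_grad(w, x_adv, y)
        g = g / max(1.0, np.linalg.norm(g) / clip)
        grads.append(g)
    noisy = np.sum(grads, axis=0) + rng.normal(0.0, sigma * clip, size=w.shape)
    return w - lr * noisy / len(X)

def log_loss(w, x, y):
    p = np.clip(sigmoid(w @ x), 1e-12, 1 - 1e-12)
    return -(y * np.log(p) + (1 - y) * np.log(1 - p))

# Toy data: two Gaussian blobs; half used for training, half held out.
n = 200
X = np.vstack([rng.normal(-2.0, 1.0, (n, 2)), rng.normal(2.0, 1.0, (n, 2))])
Y = np.array([0] * n + [1] * n)
idx = rng.permutation(2 * n)
train, test = idx[:n], idx[n:]

w = np.zeros(2)
for _ in range(50):
    w = dp_adv_step(w, X[train], Y[train])

acc = np.mean((sigmoid(X[test] @ w) > 0.5) == Y[test])

# Loss-threshold membership inference (in the spirit of Yeom et al., 2018):
# guess "member" when an example's loss is below the mean training loss.
# For a private model that generalizes, attack accuracy stays near chance.
tau = np.mean([log_loss(w, x, y) for x, y in zip(X[train], Y[train])])
member_guess = np.array([log_loss(w, x, y) < tau for x, y in zip(X, Y)])
is_member = np.isin(np.arange(2 * n), train)
mia_acc = np.mean(member_guess == is_member)
print(f"test accuracy {acc:.2f}, MIA accuracy {mia_acc:.2f}")
```

An MIA accuracy close to 0.5 is the behavior the paper's empirical claim describes: the robust private model leaks no more membership signal than a non-robust private one.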

References (20)
  1. Deep learning with differential privacy. In Proceedings of the 2016 ACM SIGSAC conference on computer and communications security, 308–318.
  2. Anonymous ICLR22 Reviewers. 2022. Review of ICLR22 submitted manuscript: Practical Adversarial Training with Differential Privacy for Deep Learning. https://openreview.net/forum?id=1hw-h1C8bch.
  3. Practical Adversarial Training with Differential Privacy for Deep Learning.
  4. Differentially private empirical risk minimization. Journal of Machine Learning Research, 12(3).
  5. Deng, L. 2012. The MNIST database of handwritten digit images for machine learning research [best of the web]. IEEE Signal Processing Magazine, 29(6): 141–142.
  6. Explaining and harnessing adversarial examples. arXiv preprint arXiv:1412.6572.
  7. Private convex empirical risk minimization and high-dimensional regression. In Conference on Learning Theory, 25–1. JMLR Workshop and Conference Proceedings.
  8. Learning multiple layers of features from tiny images.
  9. Towards deep learning models resistant to adversarial attacks. arXiv preprint arXiv:1706.06083.
  10. Scalable differential privacy with certified robustness in adversarial learning. In International Conference on Machine Learning, 7683–7694. PMLR.
  11. Overfitting in adversarially robust deep learning. In International Conference on Machine Learning, 8093–8104. PMLR.
  12. Learning in a large function space: Privacy-preserving mechanisms for SVM learning. arXiv preprint arXiv:0911.5708.
  13. White-box vs black-box: Bayes optimal strategies for membership inference. In International Conference on Machine Learning, 5558–5567. PMLR.
  14. Membership inference attacks against machine learning models. In 2017 IEEE symposium on security and privacy (SP), 3–18. IEEE.
  15. Privacy risks of securing machine learning models against adversarial examples. In Proceedings of the 2019 ACM SIGSAC Conference on Computer and Communications Security, 241–257.
  16. Intriguing properties of neural networks. arXiv preprint arXiv:1312.6199.
  17. Stealing machine learning models via prediction APIs. In 25th USENIX security symposium (USENIX Security 16), 601–618.
  18. Robustness threats of differential privacy. arXiv preprint arXiv:2012.07828.
  19. Fashion-MNIST: a novel image dataset for benchmarking machine learning algorithms. arXiv preprint arXiv:1708.07747.
  20. Privacy risk in machine learning: Analyzing the connection to overfitting. In 2018 IEEE 31st computer security foundations symposium (CSF), 268–282. IEEE.
Authors (3)
  1. Janvi Thakkar (6 papers)
  2. Giulio Zizzo (25 papers)
  3. Sergio Maffeis (14 papers)
