
Gradient Masking and the Underestimated Robustness Threats of Differential Privacy in Deep Learning (2105.07985v1)

Published 17 May 2021 in cs.CR, cs.AI, and cs.LG

Abstract: An important problem in deep learning is the privacy and security of neural networks (NNs). Both aspects have long been considered separately. To date, it is still poorly understood how privacy-enhancing training affects the robustness of NNs. This paper experimentally evaluates the impact of training with Differential Privacy (DP), a standard method for privacy preservation, on model vulnerability to a broad range of adversarial attacks. The results suggest that private models are less robust than their non-private counterparts, and that adversarial examples transfer better among DP models than between non-private and private ones. Furthermore, detailed analyses of DP and non-DP models reveal significant differences between their gradients. Additionally, this work is the first to observe that an unfavorable choice of parameters in DP training can lead to gradient masking and thereby a false sense of security.
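The "training with Differential Privacy" the abstract refers to is typically realized as DP-SGD: per-example gradient clipping followed by calibrated Gaussian noise. As a point of reference for the technique under study, below is a minimal DP-SGD sketch. It assumes PyTorch, a toy linear model, synthetic data, and illustrative hyperparameter values (`noise_multiplier`, `max_grad_norm`); none of these come from the paper.

```python
# Minimal DP-SGD sketch: toy model, synthetic data, and illustrative
# hyperparameters; not the paper's experimental setup.
import torch

torch.manual_seed(0)
model = torch.nn.Linear(10, 2)
loss_fn = torch.nn.CrossEntropyLoss()
optimizer = torch.optim.SGD(model.parameters(), lr=0.1)

max_grad_norm = 1.0     # per-example clipping bound C (assumed value)
noise_multiplier = 1.0  # sigma; noise std = sigma * C (assumed value)

x = torch.randn(32, 10)         # synthetic batch
y = torch.randint(0, 2, (32,))

# Accumulate clipped per-example gradients (microbatches of size 1;
# written for clarity, not efficiency).
summed = [torch.zeros_like(p) for p in model.parameters()]
for i in range(x.size(0)):
    model.zero_grad()
    loss_fn(model(x[i:i+1]), y[i:i+1]).backward()
    total_norm = torch.sqrt(sum(p.grad.norm() ** 2 for p in model.parameters()))
    scale = (max_grad_norm / (total_norm + 1e-6)).clamp(max=1.0)
    for s, p in zip(summed, model.parameters()):
        s += p.grad * scale

# Add calibrated Gaussian noise, average, and take one SGD step.
for s, p in zip(summed, model.parameters()):
    noise = torch.randn_like(s) * noise_multiplier * max_grad_norm
    p.grad = (s + noise) / x.size(0)
optimizer.step()
```

The clipping bound and noise multiplier here are exactly the kind of DP training parameters the paper argues can, if chosen unfavorably, induce gradient masking and thus an overestimate of the resulting model's robustness.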

Authors (3)
  1. Franziska Boenisch (41 papers)
  2. Philip Sperl (17 papers)
  3. Konstantin Böttinger (28 papers)
Citations (11)
