Privacy Risks of Securing Machine Learning Models against Adversarial Examples (1905.10291v3)

Published 24 May 2019 in stat.ML, cs.CR, and cs.LG

Abstract: The arms race between attacks and defenses for machine learning models has come to a forefront in recent years, in both the security community and the privacy community. However, one big limitation of previous research is that the security domain and the privacy domain have typically been considered separately. It is thus unclear whether the defense methods in one domain will have any unexpected impact on the other domain. In this paper, we take a step towards resolving this limitation by combining the two domains. In particular, we measure the success of membership inference attacks against six state-of-the-art defense methods that mitigate the risk of adversarial examples (i.e., evasion attacks). Membership inference attacks determine whether or not an individual data record has been part of a model's training set. The accuracy of such attacks reflects the information leakage of training algorithms about individual members of the training set. Adversarial defense methods against adversarial examples influence the model's decision boundaries such that model predictions remain unchanged for a small area around each input. However, this objective is optimized on training data. Thus, individual data records in the training set have a significant influence on robust models. This makes the models more vulnerable to inference attacks. To perform the membership inference attacks, we leverage the existing inference methods that exploit model predictions. We also propose two new inference methods that exploit structural properties of robust models on adversarially perturbed data. Our experimental evaluation demonstrates that compared with the natural training (undefended) approach, adversarial defense methods can indeed increase the target model's risk against membership inference attacks.

Privacy Risks of Securing Machine Learning Models Against Adversarial Examples

This paper addresses a gap between the machine learning security and privacy communities by analyzing the adverse impact of adversarial defenses on privacy, specifically with respect to membership inference attacks. The research asks whether making machine learning models robust against adversarial examples inadvertently increases the risk of membership inference, i.e., determining whether a specific data record was part of a model's training set. Such attacks pose a significant privacy threat because they can reveal sensitive information about an individual's presence in a training dataset.
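As a concrete illustration of the kind of inference the attacker performs, the sketch below shows a simple confidence-thresholding membership inference baseline; the threshold and example values are purely illustrative and not taken from the paper.

```python
import numpy as np

def confidence_attack(confidences, threshold=0.9):
    """Confidence-thresholding membership inference: predict 'member' when the
    model's confidence in the true label exceeds a threshold (training points
    are typically fit more tightly than held-out points)."""
    return confidences >= threshold

# Model probability assigned to the true label for training-set (member) and
# held-out (non-member) examples; values are illustrative only.
confidences_train = np.array([0.99, 0.97, 0.95, 0.88])
confidences_test  = np.array([0.70, 0.92, 0.60, 0.55])

# Balanced attack accuracy: average of the true-positive rate on members
# and the true-negative rate on non-members.
tpr = confidence_attack(confidences_train).mean()
tnr = 1.0 - confidence_attack(confidences_test).mean()
print("membership inference accuracy:", 0.5 * (tpr + tnr))
```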

The paper evaluates six state-of-the-art adversarial defense techniques aimed at mitigating the risk of adversarial examples. The empirical findings reveal that these defenses, while strengthening models against adversarial perturbations, also increase their susceptibility to membership inference attacks. Quantitatively, models trained with adversarial defenses exhibited up to a 4.5-fold increase in membership inference risk compared to naturally trained (undefended) models.

Key outcomes are supported by experimental evaluations across diverse datasets, including Yale Face, Fashion-MNIST, and CIFAR10, covering both empirical and verifiable defense methods such as PGD-based adversarial training and interval bound propagation. The analysis combines existing membership inference methods that exploit model predictions with two new methods that exploit predictions on adversarially perturbed inputs and verified worst-case bounds.
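For context, PGD-based adversarial training augments each optimization step with worst-case perturbations found by projected gradient descent. The sketch below is a generic rendition of that idea, assuming a PyTorch image classifier with inputs in [0, 1]; the hyperparameters are placeholders rather than the paper's exact settings.

```python
import torch
import torch.nn.functional as F

def pgd_attack(model, x, y, eps=8/255, alpha=2/255, steps=10):
    """Find an L-infinity-bounded perturbation that maximizes the loss (PGD)."""
    x_adv = (x + torch.empty_like(x).uniform_(-eps, eps)).clamp(0, 1)
    for _ in range(steps):
        x_adv = x_adv.detach().requires_grad_(True)
        loss = F.cross_entropy(model(x_adv), y)
        grad = torch.autograd.grad(loss, x_adv)[0]
        x_adv = x_adv + alpha * grad.sign()
        # Project back into the eps-ball around x and the valid pixel range.
        x_adv = torch.min(torch.max(x_adv, x - eps), x + eps).clamp(0, 1)
    return x_adv.detach()

def adversarial_training_step(model, optimizer, x, y):
    """One adversarial training step: fit the model on worst-case perturbed inputs."""
    x_adv = pgd_attack(model, x, y)
    optimizer.zero_grad()
    loss = F.cross_entropy(model(x_adv), y)
    loss.backward()
    optimizer.step()
    return loss.item()
```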

The paper underscores a central finding: the generalization gap, particularly the gap in adversarial (robust) accuracy between training and test data, correlates with increased privacy leakage. Robust training effectively requires the model to fit "virtual training points" throughout the allowed perturbation region around each training example, which amplifies the training data's influence on model predictions. Further analysis indicates that model capacity and the adversarial perturbation budget significantly affect both privacy risk and robustness: larger model capacities and larger perturbation budgets lead to elevated membership inference risk.
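One way to make the link between this gap and leakage explicit: under a simple correctness-based attack that guesses "member" exactly when the (adversarially perturbed) input is still classified correctly, the balanced inference accuracy reduces to a function of the robust-accuracy gap. This is a standard baseline; the notation below is ours, not the paper's.

```latex
\mathrm{Acc}_{\mathrm{inference}}
  = \tfrac{1}{2}\,\mathrm{Acc}^{\mathrm{robust}}_{\mathrm{train}}
  + \tfrac{1}{2}\bigl(1 - \mathrm{Acc}^{\mathrm{robust}}_{\mathrm{test}}\bigr)
  = \tfrac{1}{2} + \tfrac{1}{2}\bigl(\mathrm{Acc}^{\mathrm{robust}}_{\mathrm{train}} - \mathrm{Acc}^{\mathrm{robust}}_{\mathrm{test}}\bigr)
```

For instance, a robust-accuracy gap of 40 percentage points already yields roughly 70% inference accuracy against the 50% random-guess baseline.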

These results call for a careful balance between adversarial robustness and data privacy, and underscore the need to consider privacy ramifications when deploying defenses against adversarial examples. Although techniques such as temperature scaling and regularization can mitigate some of the privacy risk, the paper emphasizes the need for future research into strategies that reconcile robustness and privacy preservation without compromising either.
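Temperature scaling, one of the mitigations mentioned, divides the logits by a constant T > 1 before the softmax, which flattens prediction confidence without changing the predicted label. The sketch below is a minimal illustration; the temperature and logits are illustrative, not values from the paper.

```python
import torch
import torch.nn.functional as F

def predict_with_temperature(logits, T=2.0):
    """Soften confidence by dividing logits by a temperature T > 1 before softmax.
    The argmax (predicted label) is unchanged, but confidence gaps between
    members and non-members shrink, reducing inference-attack accuracy."""
    return F.softmax(logits / T, dim=-1)

logits = torch.tensor([[6.0, 1.0, 0.5]])        # illustrative logits
print(F.softmax(logits, dim=-1))                 # sharp: ~[0.99, 0.007, 0.004]
print(predict_with_temperature(logits, T=4.0))   # softened distribution
```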

In conclusion, the research delineates an inherent conflict in current adversarial defense methods, suggesting that as these strategies continue to evolve, integrating robust privacy-preserving mechanisms remains an essential direction for subsequent advances in machine learning security.

Authors (3)
  1. Liwei Song (13 papers)
  2. Reza Shokri (46 papers)
  3. Prateek Mittal (129 papers)
Citations (217)