Membership Leakage in Label-Only Exposures (2007.15528v3)

Published 30 Jul 2020 in cs.LG, cs.CR, and stat.ML

Abstract: Machine learning (ML) has been widely adopted in various privacy-critical applications, e.g., face recognition and medical image analysis. However, recent research has shown that ML models are vulnerable to attacks against their training data. Membership inference is one major attack in this domain: Given a data sample and model, an adversary aims to determine whether the sample is part of the model's training set. Existing membership inference attacks leverage the confidence scores returned by the model as their inputs (score-based attacks). However, these attacks can be easily mitigated if the model only exposes the predicted label, i.e., the final model decision. In this paper, we propose decision-based membership inference attacks and demonstrate that label-only exposures are also vulnerable to membership leakage. In particular, we develop two types of decision-based attacks, namely transfer attack, and boundary attack. Empirical evaluation shows that our decision-based attacks can achieve remarkable performance, and even outperform the previous score-based attacks in some cases. We further present new insights on the success of membership inference based on quantitative and qualitative analysis, i.e., member samples of a model are more distant to the model's decision boundary than non-member samples. Finally, we evaluate multiple defense mechanisms against our decision-based attacks and show that our two types of attacks can bypass most of these defenses.

Authors (2)
  1. Zheng Li (326 papers)
  2. Yang Zhang (1129 papers)
Citations (222)

Summary

Evaluating Membership Leakage in Label-Only Scenarios

The paper, authored by Zheng Li and Yang Zhang, explores a pertinent issue in ML security: membership inference attacks. In particular, it examines these attacks under label-only exposure, a scenario in which the adversary observes only the model's output label rather than its confidence scores. This research is timely given the widespread adoption of ML models in privacy-sensitive domains such as facial recognition and medical image analysis, where training data privacy is paramount.

Key Contributions

  1. Introduction of Decision-Based Attacks: The paper presents two novel decision-based membership inference attacks: the transfer attack and the boundary attack. Both rely solely on the top-1 predicted label returned by the target model, marking a shift from traditional score-based attacks, which require confidence scores.
  2. Transfer Attack Methodology: The transfer attack assumes a shadow dataset drawn from the same distribution as the target model's training data. The adversary relabels this dataset with the target model's predicted labels and uses it to train a local shadow model that mimics the target. Because the shadow model is fully under the adversary's control, its confidence scores are available locally, so a score-based criterion applied to the shadow model serves as the membership signal for the target (see the first sketch after this list).
  3. Boundary Attack Methodology: Unlike the transfer attack, the boundary attack requires no shadow dataset. The adversary iteratively perturbs a candidate input until the target model's decision changes and uses the magnitude of that perturbation, a proxy for the sample's distance to the decision boundary, as the membership signal. The attack exploits the observation that member samples typically lie farther from the decision boundary than non-member samples (see the second sketch after this list).
  4. Experimental Evaluation and Insights: Extensive experiments demonstrate the robustness and effectiveness of both attacks, which in some configurations surpass traditional score-based attacks. The evaluation spans multiple benchmark datasets, including CIFAR-10, CIFAR-100, GTSRB, and a Face dataset, illustrating the broad applicability of the proposed attacks.
  5. Defensive Postures Against Attacks: The paper also evaluates existing defense mechanisms against these decision-based attacks. It finds that traditional defenses such as differential privacy and data perturbation mitigate some risk but are not fully effective against label-only attacks unless coupled with heavy regularization, which degrades model performance.
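
The following is a minimal sketch of the transfer attack pipeline, assuming label-only query access to a target model with a scikit-learn-style predict interface and a shadow dataset of flattened feature vectors. The helper names (query_target_label, train_shadow_model, transfer_attack), the shadow architecture, and the confidence threshold are illustrative assumptions, not the paper's exact configuration.

```python
# Hypothetical sketch of the transfer attack: the shadow dataset is relabeled
# with the target model's top-1 predictions, a local shadow model is trained
# on those labels, and the shadow model's confidence on a candidate sample is
# thresholded to decide membership.
import numpy as np
from sklearn.neural_network import MLPClassifier

def query_target_label(target_model, x: np.ndarray) -> np.ndarray:
    """Label-only access: the target exposes only its top-1 predicted class."""
    return target_model.predict(x)

def train_shadow_model(target_model, shadow_x: np.ndarray) -> MLPClassifier:
    # Relabel the shadow dataset with the target's predicted labels, then fit
    # a local shadow model that imitates the target's decision behavior.
    shadow_y = query_target_label(target_model, shadow_x)
    shadow_model = MLPClassifier(hidden_layer_sizes=(128,), max_iter=200)
    shadow_model.fit(shadow_x, shadow_y)
    return shadow_model

def transfer_attack(shadow_model: MLPClassifier, candidate_x: np.ndarray,
                    threshold: float = 0.9) -> np.ndarray:
    # The shadow model yields confidence scores locally, so a score-based
    # criterion (here: maximum softmax probability) is applied to it in place
    # of the target. High confidence is taken as evidence of membership.
    confidence = shadow_model.predict_proba(candidate_x).max(axis=1)
    return confidence >= threshold
```

In practice the threshold would be calibrated rather than fixed, for example by choosing the value that best separates the shadow model's own training samples from held-out shadow samples.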
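
The boundary attack can be sketched in the same style. The noise-scaling search below is a deliberate simplification used only to illustrate estimating a sample's distance to the decision boundary from label queries alone; the paper relies on decision-based adversarial perturbation techniques for a tighter estimate, but the membership criterion, thresholding the estimated distance, is the same. Function names and the search budget are assumptions.

```python
# Hypothetical sketch of the boundary attack: estimate how much perturbation is
# needed to change the target's top-1 label, then treat a larger estimate
# (i.e., a larger distance to the decision boundary) as evidence of membership.
import numpy as np

def estimate_boundary_distance(target_model, x: np.ndarray,
                               max_scale: float = 2.0, steps: int = 50,
                               trials: int = 10, rng=None) -> float:
    """Smallest Gaussian-noise magnitude that flips the target's predicted label.

    `x` is assumed to be a single flattened feature vector.
    """
    if rng is None:
        rng = np.random.default_rng(0)
    original_label = target_model.predict(x[None, :])[0]
    for scale in np.linspace(0.0, max_scale, steps)[1:]:
        for _ in range(trials):
            perturbed = x + rng.normal(scale=scale, size=x.shape)
            if target_model.predict(perturbed[None, :])[0] != original_label:
                return float(scale)  # first magnitude at which the decision flips
    return max_scale  # the label never flipped within the search budget

def boundary_attack(target_model, x: np.ndarray, threshold: float) -> bool:
    # Members are expected to sit farther from the boundary than non-members.
    return estimate_boundary_distance(target_model, x) >= threshold
```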

Implications and Future Directions

The implications of this paper are significant. It highlights a previously underestimated attack vector: membership inference through the label-only interfaces of ML models deployed in real-world scenarios. This calls for a reassessment of privacy assumptions and defenses in current ML systems. For practitioners, it underscores that restricting a model's output to predicted labels alone does not protect training data privacy.

In terms of theoretical advancement, the paper lays a foundation for future research on decision boundary properties in deep learning models. The observation that members tend to lie farther from the decision boundary than non-members opens a line of inquiry into training dynamics that could improve resilience to membership leakage.

Moving forward, potential areas of exploration include integrating these attacks into broader, multi-faceted adversarial evaluation frameworks and developing defenses that go beyond perturbation-based methods. As the field of ML privacy progresses, the community must remain vigilant to evolving attack strategies that bypass established countermeasures.

In sum, this work provides a valuable contribution to the security and privacy landscape of ML, addressing the nuances of label-only scenarios with rigor and foresight.