Membership Inference Attacks on DNNs using Adversarial Perturbations (2307.05193v1)
Abstract: Several membership inference (MI) attacks have been proposed to audit a target DNN: given a set of subjects, an MI attack tells which subjects the target DNN has seen during training. This work focuses on post-training MI attacks that emphasize high-confidence membership detection, i.e., true positive rates (TPR) at low false positive rates (FPR). Current works in this category, the likelihood ratio attack (LiRA) and the enhanced MI attack (EMIA), perform well only on complex datasets (e.g., CIFAR-10 and ImageNet) where the target DNN overfits its training set, but perform poorly on simpler datasets (0% TPR by both attacks on Fashion-MNIST, and 2% and 0% TPR on MNIST by LiRA and EMIA respectively, at 1% FPR). To address this, we first unify current MI attacks under a three-stage framework: preparation, indication, and decision. Second, we use this framework to propose two novel attacks: (1) the Adversarial Membership Inference Attack (AMIA), which efficiently exploits both the membership and the non-membership information of the subjects while adversarially minimizing a novel loss function, achieving 6% TPR on both Fashion-MNIST and MNIST; and (2) Enhanced AMIA (E-AMIA), which combines EMIA and AMIA to achieve 8% and 4% TPR on Fashion-MNIST and MNIST respectively, at 1% FPR. Third, we introduce two novel augmented indicators that positively leverage the loss information in the Gaussian neighborhood of a subject, improving the TPR of all four attacks by 2.5% and 0.25% on average on Fashion-MNIST and MNIST respectively, at 1% FPR. Finally, we propose a simple yet novel evaluation metric, the running TPR average (RTA) at a given FPR, that better distinguishes MI attacks in the low-FPR region. We also show that AMIA and E-AMIA transfer better to unknown DNNs (other than the target DNN) and are more robust to DP-SGD training than LiRA and EMIA.
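Two of the abstract's technical ideas can be made concrete with short sketches. First, a minimal PyTorch sketch of a Gaussian-neighborhood augmented indicator: it scores a subject by the model's average loss over Gaussian perturbations of the input. The function name and the parameters `sigma` and `n_samples` are illustrative assumptions, not the paper's exact formulation.

```python
import torch
import torch.nn.functional as F

def gaussian_neighborhood_loss(model, x, y, sigma=0.05, n_samples=16):
    """Average cross-entropy loss of `model` over Gaussian perturbations of `x`.

    A lower neighborhood loss is taken as (hypothetical) evidence that the
    subject (x, y) was a training member, mirroring the abstract's use of
    loss information in the Gaussian neighborhood of a subject.
    """
    model.eval()
    losses = []
    with torch.no_grad():
        for _ in range(n_samples):
            # Sample one point from the Gaussian neighborhood of x.
            x_noisy = x + sigma * torch.randn_like(x)
            losses.append(F.cross_entropy(model(x_noisy), y))
    return torch.stack(losses).mean()
```

Second, one plausible reading of the running TPR average (RTA) at a given FPR is the mean TPR over an evenly spaced grid of FPR operating points up to that FPR; the paper's precise definition may differ, so the NumPy sketch below is an assumption-labeled illustration rather than the authors' metric.

```python
import numpy as np

def running_tpr_average(member_scores, nonmember_scores, max_fpr=0.01, n_points=100):
    """Mean TPR over FPR operating points in (0, max_fpr] (assumed RTA definition).

    Scores follow the convention 'higher score => more likely a member'.
    """
    fprs = np.linspace(max_fpr / n_points, max_fpr, n_points)
    # For each target FPR, threshold at the (1 - fpr)-quantile of non-member scores.
    thresholds = np.quantile(nonmember_scores, 1.0 - fprs)
    tprs = [(member_scores >= t).mean() for t in thresholds]
    return float(np.mean(tprs))
```

Under this reading, an attack that sustains high TPR across the whole low-FPR region scores a higher RTA than one that peaks at a single operating point, which is consistent with the abstract's claim that RTA better separates MI attacks in that region.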
- A. Qayyum, J. Qadir, M. Bilal, and A. Al-Fuqaha, “Secure and robust machine learning for healthcare: A survey,” IEEE Reviews in Biomedical Engineering, vol. 14, pp. 156–180, 2020.
- R. Shokri, M. Stronati, C. Song, and V. Shmatikov, “Membership inference attacks against machine learning models,” in 2017 IEEE Symposium on Security and Privacy (SP). IEEE, 2017, pp. 3–18.
- N. Carlini, S. Chien, M. Nasr, S. Song, A. Terzis, and F. Tramer, “Membership inference attacks from first principles,” in 2022 IEEE Symposium on Security and Privacy (SP). IEEE, 2022, pp. 1897–1914.
- J. Ye, A. Maddi, S. K. Murakonda, V. Bindschaedler, and R. Shokri, “Enhanced membership inference attacks against machine learning models,” in Proceedings of the 2022 ACM SIGSAC Conference on Computer and Communications Security, 2022, pp. 3093–3106.
- N. Khalid, A. Qayyum, M. Bilal, A. Al-Fuqaha, and J. Qadir, “Privacy-preserving artificial intelligence in healthcare: Techniques and applications,” Computers in Biology and Medicine, p. 106848, 2023.
- F. Tramèr, R. Shokri, A. San Joaquin, H. Le, M. Jagielski, S. Hong, and N. Carlini, “Truth serum: Poisoning machine learning models to reveal their secrets,” in Proceedings of the 2022 ACM SIGSAC Conference on Computer and Communications Security, 2022, pp. 2779–2792.
- F. Boenisch, A. Dziedzic, R. Schuster, A. S. Shamsabadi, I. Shumailov, and N. Papernot, “Is federated learning a practical PET yet?” arXiv preprint arXiv:2301.04017, 2023.
- T. Nguyen, P. Lai, K. Tran, N. Phan, and M. T. Thai, “Active membership inference attack under local differential privacy in federated learning,” arXiv preprint arXiv:2302.12685, 2023.
- M. Jagielski, M. Nasr, C. Choquette-Choo, K. Lee, and N. Carlini, “Students parrot their teachers: Membership inference on model distillation,” arXiv preprint arXiv:2303.03446, 2023.
- S. Yeom, I. Giacomelli, M. Fredrikson, and S. Jha, “Privacy risk in machine learning: Analyzing the connection to overfitting,” in 2018 IEEE 31st Computer Security Foundations Symposium (CSF). IEEE, 2018, pp. 268–282.
- H. Jalalzai, E. Kadoche, R. Leluc, and V. Plassier, “Membership inference attacks via adversarial examples,” arXiv preprint arXiv:2207.13572, 2022.
- H. Ali, M. S. Khan, A. Al-Fuqaha, and J. Qadir, “Tamp-X: Attacking explainable natural language classifiers through tampered activations,” Computers & Security, vol. 120, p. 102791, 2022.
- F. Khalid, H. Ali, M. A. Hanif, S. Rehman, R. Ahmed, and M. Shafique, “FaDec: A fast decision-based attack for adversarial machine learning,” in 2020 International Joint Conference on Neural Networks (IJCNN). IEEE, 2020, pp. 1–8.
- H. Ali, M. S. Khan, A. AlGhadhban, M. Alazmi, A. Alzamil, K. Al-Utaibi, and J. Qadir, “All your fake detector are belong to us: Evaluating adversarial robustness of fake-news detectors under black-box settings,” IEEE Access, vol. 9, pp. 81678–81692, 2021.
- M. A. Butt, A. Qayyum, H. Ali, A. Al-Fuqaha, and J. Qadir, “Towards secure private and trustworthy human-centric embedded machine learning: An emotion-aware facial recognition case study,” Computers & Security, vol. 125, p. 103058, 2023.
- S. Latif, A. Qayyum, M. Usama, J. Qadir, A. Zwitter, and M. Shahzad, “Caveat emptor: The risks of using big data for human development,” IEEE Technology and Society Magazine, vol. 38, no. 3, pp. 82–90, 2019.
- Y. Zhang, G. Bai, M. A. P. Chamikara, M. Ma, L. Shen, J. Wang, S. Nepal, M. Xue, L. Wang, and J. Liu, “AgrEvader: Poisoning membership inference against Byzantine-robust federated learning,” in Proceedings of the ACM Web Conference 2023, 2023, pp. 2371–2382.
- Y. Chen, C. Shen, Y. Shen, C. Wang, and Y. Zhang, “Amplifying membership exposure via data poisoning,” Advances in Neural Information Processing Systems, vol. 35, pp. 29830–29844, 2022.
- M. Jagielski, S. Wu, A. Oprea, J. Ullman, and R. Geambasu, “How to combine membership-inference attacks on multiple updated machine learning models,” Proceedings on Privacy Enhancing Technologies, vol. 2023, no. 3, pp. 211–232, 2023.
- J. Tan, D. LeJeune, B. Mason, H. Javadi, and R. G. Baraniuk, “A blessing of dimensionality in membership inference through regularization,” in International Conference on Artificial Intelligence and Statistics. PMLR, 2023, pp. 10968–10993.
- S. Rezaei, Z. Shafiq, and X. Liu, “Accuracy-privacy trade-off in deep ensemble: A membership inference perspective,” in 2023 IEEE Symposium on Security and Privacy (SP). IEEE Computer Society, 2023, pp. 1901–1918.
- Y. Liu, Z. Zhao, M. Backes, and Y. Zhang, “Membership inference attacks by exploiting loss trajectory,” in Proceedings of the 2022 ACM SIGSAC Conference on Computer and Communications Security, 2022, pp. 2085–2098.
- G. Del Grosso, H. Jalalzai, G. Pichler, C. Palamidessi, and P. Piantanida, “Leveraging adversarial examples to quantify membership information leakage,” in Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, 2022, pp. 10399–10409.
- X. Yuan and L. Zhang, “Membership inference attacks and defenses in neural network pruning,” in 31st USENIX Security Symposium (USENIX Security 22), 2022, pp. 4561–4578.
- C. A. Choquette-Choo, F. Tramer, N. Carlini, and N. Papernot, “Label-only membership inference attacks,” in International Conference on Machine Learning. PMLR, 2021, pp. 1964–1974.
- N. Carlini and D. Wagner, “Towards evaluating the robustness of neural networks,” in 2017 IEEE Symposium on Security and Privacy (SP). IEEE, 2017, pp. 39–57.
- A. Madry, A. Makelov, L. Schmidt, D. Tsipras, and A. Vladu, “Towards deep learning models resistant to adversarial attacks,” in International Conference on Learning Representations, 2018.
- F. Croce and M. Hein, “Reliable evaluation of adversarial robustness with an ensemble of diverse parameter-free attacks,” in International Conference on Machine Learning. PMLR, 2020, pp. 2206–2216.
- H. Ali, M. A. Butt, F. Filali, A. Al-Fuqaha, and J. Qadir, “Consistent valid physically-realizable adversarial attack against crowd-flow prediction models,” arXiv preprint arXiv:2303.02669, 2023.
- J. Cohen, E. Rosenfeld, and Z. Kolter, “Certified adversarial robustness via randomized smoothing,” in International Conference on Machine Learning. PMLR, 2019, pp. 1310–1320.
- H. Ali, M. S. Khan, A. AlGhadhban, M. Alazmi, A. Alzamil, K. Al-Utaibi, and J. Qadir, “Con-Detect: Detecting adversarially perturbed natural language inputs to deep classifiers through holistic analysis,” Computers & Security, p. 103367, 2023.
- W. Nie, B. Guo, Y. Huang, C. Xiao, A. Vahdat, and A. Anandkumar, “Diffusion models for adversarial purification,” in International Conference on Machine Learning. PMLR, 2022, pp. 16805–16827.
- N. Carlini, F. Tramer, K. D. Dvijotham, L. Rice, M. Sun, and J. Z. Kolter, “(Certified!!) Adversarial robustness for free!” in The Eleventh International Conference on Learning Representations, 2023.
- B. Jayaraman, L. Wang, K. Knipmeyer, Q. Gu, and D. Evans, “Revisiting membership inference under realistic assumptions,” Proceedings on Privacy Enhancing Technologies, vol. 2021, no. 2, 2021.
- M. Fredrikson, S. Jha, and T. Ristenpart, “Model inversion attacks that exploit confidence information and basic countermeasures,” in Proceedings of the 22nd ACM SIGSAC conference on computer and communications security, 2015, pp. 1322–1333.
- Y. Zhang, R. Jia, H. Pei, W. Wang, B. Li, and D. Song, “The secret revealer: Generative model-inversion attacks against deep neural networks,” in Proceedings of the IEEE/CVF conference on computer vision and pattern recognition, 2020, pp. 253–261.
- S.-M. Moosavi-Dezfooli, A. Fawzi, O. Fawzi, and P. Frossard, “Universal adversarial perturbations,” in Proceedings of the IEEE conference on computer vision and pattern recognition, 2017, pp. 1765–1773.
- Hassan Ali (24 papers)
- Adnan Qayyum (25 papers)
- Ala Al-Fuqaha (82 papers)
- Junaid Qadir (110 papers)