How Does a Deep Learning Model Architecture Impact Its Privacy? A Comprehensive Study of Privacy Attacks on CNNs and Transformers (2210.11049v3)

Published 20 Oct 2022 in cs.CR, cs.AI, cs.LG, and stat.ML

Abstract: Deep learning has boomed over the past decade, driven by big data collected and processed at an unprecedented scale. However, privacy concerns arise from the potential leakage of sensitive information in the training data. Recent research has revealed that deep learning models are vulnerable to various privacy attacks, including membership inference attacks, attribute inference attacks, and gradient inversion attacks. Notably, the efficacy of these attacks varies from model to model. In this paper, we answer a fundamental question: does model architecture affect model privacy? Investigating representative model architectures ranging from convolutional neural networks (CNNs) to Transformers, we demonstrate that Transformers generally exhibit higher vulnerability to privacy attacks than CNNs. Additionally, we identify the micro designs of activation layers, stem layers, and layer normalization (LN) layers as major factors contributing to the resilience of CNNs against privacy attacks, while the presence of attention modules is another main factor that exacerbates the privacy vulnerability of Transformers. Our findings offer valuable insights for defending deep learning models against privacy attacks and should inspire the research community to develop privacy-friendly model architectures.
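The abstract names membership inference as one of the attack classes whose efficacy is compared across architectures. As a rough, hypothetical illustration of that attack class (not the paper's specific evaluation protocol), the sketch below implements a minimal loss-threshold membership inference attack: training-set members typically incur lower loss than non-members, so thresholding per-example loss separates the two groups. The loss distributions and threshold sweep here are synthetic stand-ins.

```python
# Minimal sketch of a loss-threshold membership inference attack.
# Assumption: we have per-example losses of a trained model on known
# members (training points) and known non-members; members tend to
# have lower loss because the model has (partially) memorized them.
import numpy as np

def loss_threshold_mia(losses, threshold):
    """Predict membership: loss below threshold -> flagged as 'member'."""
    return np.asarray(losses) < threshold

def attack_balanced_accuracy(member_losses, nonmember_losses, threshold):
    """Balanced accuracy of the threshold attack."""
    tpr = np.mean(loss_threshold_mia(member_losses, threshold))      # members correctly flagged
    tnr = np.mean(~loss_threshold_mia(nonmember_losses, threshold))  # non-members correctly rejected
    return 0.5 * (tpr + tnr)

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    # Hypothetical losses: members are fit more tightly (lower loss).
    member_losses = rng.gamma(shape=1.0, scale=0.3, size=1000)
    nonmember_losses = rng.gamma(shape=2.0, scale=0.8, size=1000)
    # Sweep thresholds and report the best balanced attack accuracy;
    # values well above 0.5 indicate membership leakage.
    thresholds = np.linspace(0.0, 3.0, 301)
    best = max(attack_balanced_accuracy(member_losses, nonmember_losses, t)
               for t in thresholds)
    print(f"best balanced attack accuracy: {best:.3f}")
```

Stronger attacks in the literature calibrate the decision per example (e.g., against shadow-model loss statistics) rather than using a single global threshold, but the thresholding idea above is the common core.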
