Breaking the Black-Box: Confidence-Guided Model Inversion Attack for Distribution Shift (2402.18027v1)

Published 28 Feb 2024 in cs.CV

Abstract: Model inversion attacks (MIAs) seek to infer the private training data of a target classifier by generating synthetic images that reflect the characteristics of the target class through querying the model. However, prior studies have relied on full access to the target model, which is not practical in real-world scenarios. Additionally, existing black-box MIAs assume that the image prior and the target model follow the same distribution, which can lead to suboptimal attack performance under diverse data distribution settings. To address these limitations, this paper proposes a Confidence-Guided Model Inversion attack method called CG-MI, which uses the latent space of a pre-trained, publicly available generative adversarial network (GAN) as prior information together with a gradient-free optimizer, enabling high-resolution MIAs across different data distributions in a black-box setting. Our experiments demonstrate that our method significantly outperforms the SOTA black-box MIA by more than 49% on CelebA and 58% on FaceScrub in different distribution settings. Furthermore, our method generates high-quality images comparable to those produced by white-box attacks. Our method provides a practical and effective solution for black-box model inversion attacks.
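
The core loop the abstract describes, steering a public GAN's latent code with a gradient-free optimizer toward high target-class confidence from a black-box classifier, can be sketched as follows. This is a minimal illustration, not the authors' implementation: the toy generator and classifier, the latent size, and the choice of CMA-ES (one common gradient-free optimizer) are all assumptions standing in for the real pre-trained models.

```python
# Minimal sketch of confidence-guided black-box model inversion over a GAN
# latent space. The generator/classifier below are hypothetical stand-ins;
# a real attack would plug in a pre-trained public GAN and the victim model.
import cma                     # pycma: pip install cma
import numpy as np
import torch
import torch.nn as nn

latent_dim, num_classes, target_class = 64, 10, 3

# Toy stand-ins for the two components (assumed shapes: 32x32 RGB images).
generator = nn.Sequential(nn.Linear(latent_dim, 3 * 32 * 32), nn.Tanh())
target_model = nn.Sequential(nn.Linear(3 * 32 * 32, num_classes))

def confidence_loss(z: np.ndarray) -> float:
    """Negative target-class confidence; minimizing it pushes the latent
    search toward images the classifier attributes to the target class."""
    with torch.no_grad():
        img = generator(torch.from_numpy(z).float().unsqueeze(0))
        probs = torch.softmax(target_model(img), dim=1)
    return -probs[0, target_class].item()

# CMA-ES consumes only loss values, never gradients, so the target model
# is queried strictly as a black box.
es = cma.CMAEvolutionStrategy(latent_dim * [0.0], 0.5,
                              {"maxiter": 50, "verbose": -9})
while not es.stop():
    candidates = es.ask()       # sample a population of latent vectors
    es.tell(candidates, [confidence_loss(z) for z in candidates])

z_best = es.result.xbest        # latent code of the best reconstruction found
```

In practice, the quality of the recovered images hinges on the generator's prior; the abstract's key point is that a publicly available GAN trained on a different distribution can still serve as this prior.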

