Crafter: Facial Feature Crafting against Inversion-based Identity Theft on Deep Models (2401.07205v1)
Abstract: With increased capabilities at the edge (e.g., mobile devices) and more stringent privacy requirements, it has become a recent trend for deep learning-enabled applications to pre-process sensitive raw data at the edge and transmit the features to the backend cloud for further processing. A typical application is to run ML services on facial images collected from different individuals. To prevent identity theft, conventional methods commonly rely on an adversarial game-based approach to shed identity information from the features. However, such methods cannot defend against adaptive attacks, in which an attacker takes a countermove against a known defence strategy. We propose Crafter, a feature crafting mechanism deployed at the edge, that protects identity information from adaptive model inversion attacks while ensuring the ML tasks are properly carried out in the cloud. The key defence strategy is to mislead the attacker toward a non-private prior from which the attacker gains little about the private identity. The crafted features thus act like poison training samples for attackers with adaptive model updates. Experimental results indicate that Crafter successfully defends against both basic and possible adaptive attacks, which state-of-the-art adversarial game-based methods cannot achieve.
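The crafting idea described in the abstract can be sketched as an optimization over a feature perturbation: push a surrogate inversion attacker's reconstruction toward a non-private prior image while keeping the cloud task output close to its original value. The following is a minimal, hypothetical illustration, not the paper's implementation: the toy networks (`Encoder`, `Inverter`, `CloudHead`), the MSE objectives, and the loss weight are all assumptions made for the sketch.

```python
# Hypothetical sketch of edge-side feature crafting against model inversion.
# All module names, losses, and weights are illustrative assumptions, not the
# authors' actual method.
import torch
import torch.nn as nn
import torch.nn.functional as F

class Encoder(nn.Module):
    """Edge-side feature extractor (toy stand-in)."""
    def __init__(self):
        super().__init__()
        self.net = nn.Sequential(nn.Conv2d(3, 16, 4, 2, 1), nn.ReLU(),
                                 nn.Conv2d(16, 32, 4, 2, 1), nn.ReLU())
    def forward(self, x):
        return self.net(x)

class Inverter(nn.Module):
    """Surrogate model-inversion attacker: feature -> reconstructed image."""
    def __init__(self):
        super().__init__()
        self.net = nn.Sequential(nn.ConvTranspose2d(32, 16, 4, 2, 1), nn.ReLU(),
                                 nn.ConvTranspose2d(16, 3, 4, 2, 1), nn.Sigmoid())
    def forward(self, f):
        return self.net(f)

class CloudHead(nn.Module):
    """Downstream (non-identity) task run in the cloud."""
    def __init__(self, n_classes=2):
        super().__init__()
        self.fc = nn.Linear(32 * 16 * 16, n_classes)
    def forward(self, f):
        return self.fc(f.flatten(1))

def craft_feature(x, prior_img, encoder, inverter, cloud_head,
                  steps=50, lr=0.05, util_weight=1.0):
    """Perturb the feature so the surrogate inverter reconstructs the
    non-private prior instead of the true face, while the cloud task
    output stays close to its original value."""
    with torch.no_grad():
        f0 = encoder(x)
        y0 = cloud_head(f0)                    # reference task output
    delta = torch.zeros_like(f0, requires_grad=True)
    opt = torch.optim.Adam([delta], lr=lr)
    for _ in range(steps):
        f = f0 + delta
        recon = inverter(f)
        # pull the attacker's reconstruction toward the non-private prior
        privacy_loss = F.mse_loss(recon, prior_img)
        # keep the cloud task behaviour roughly unchanged
        utility_loss = F.mse_loss(cloud_head(f), y0)
        loss = privacy_loss + util_weight * utility_loss
        opt.zero_grad()
        loss.backward()
        opt.step()
    return (f0 + delta).detach()

if __name__ == "__main__":
    x = torch.rand(1, 3, 64, 64)               # private facial image
    prior = torch.full_like(x, 0.5)             # stand-in for a non-private prior
    enc, inv, head = Encoder(), Inverter(), CloudHead()
    f_crafted = craft_feature(x, prior, enc, inv, head)
    print(f_crafted.shape)                      # crafted feature sent to the cloud
```

In this reading, only the crafted feature leaves the device, so an attacker who inverts it recovers something resembling the prior rather than the user's face, while the cloud head still produces roughly the original task output.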