Seeing is not Believing: An Identity Hider for Human Vision Privacy Protection (2307.00481v5)
Abstract: Massive numbers of captured face images are stored in databases for the identification of individuals. However, these images may be observed unintentionally by data managers, which is against the will of the individuals concerned and may cause privacy violations. Existing protection schemes maintain identifiability but change the facial appearance only slightly, leaving the original identity still susceptible to visual perception by data managers. In this paper, we propose an effective identity hider for human vision protection, which significantly changes the appearance to visually hide the identity while still allowing identification by face recognizers. Concretely, the identity hider benefits from two specially designed modules: 1) a virtual face generation module, which generates a virtual face with a new appearance by manipulating the latent space of StyleGAN2; in particular, the virtual face has a parsing map similar to that of the original face, supporting other vision tasks such as head pose detection; and 2) an appearance transfer module, which transfers the appearance of the virtual face onto the original face via attribute replacement, while identity information is well preserved with the help of disentanglement networks. In addition, diversity and background preservation are supported to meet varied requirements. Extensive experiments demonstrate that the proposed identity hider achieves excellent performance in both privacy protection and identifiability preservation.
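The two-module pipeline described in the abstract can be illustrated with a minimal sketch. Everything here is an assumption for illustration: the function names, the 18-layer W+ latent layout of StyleGAN2, and the crude identification of "appearance" with coarse latent layers and "identity" with fine layers are stand-ins, not the authors' actual disentanglement networks or released code.

```python
# Hedged sketch of the abstract's two modules; all names and the
# coarse/fine layer split are illustrative assumptions.
import numpy as np

rng = np.random.default_rng(0)
LATENT_LAYERS, LATENT_DIM = 18, 512  # StyleGAN2 W+ convention (assumed)

def invert_to_wplus(face_img):
    """Stand-in for a GAN-inversion encoder mapping an image to W+."""
    return rng.standard_normal((LATENT_LAYERS, LATENT_DIM))

def make_virtual_latent(w_plus, strength=2.0):
    """Module 1 (assumed behavior): perturb the coarse/medium layers,
    which mostly control appearance, to obtain a virtual face whose
    overall layout (parsing map) stays similar."""
    w_virtual = w_plus.copy()
    w_virtual[:8] += strength * rng.standard_normal((8, LATENT_DIM))
    return w_virtual

def transfer_appearance(w_orig, w_virtual):
    """Module 2 (assumed behavior): replace the appearance-related
    layers of the original latent with those of the virtual face,
    keeping the fine layers of the original untouched."""
    w_out = w_orig.copy()
    w_out[:8] = w_virtual[:8]
    return w_out

w = invert_to_wplus(face_img=None)          # original latent
w_virtual = make_virtual_latent(w)          # module 1
w_protected = transfer_appearance(w, w_virtual)  # module 2

# In this sketch the fine (identity-bearing) layers are unchanged:
assert np.allclose(w_protected[8:], w[8:])
```

In the paper itself, identity preservation is enforced by disentanglement networks and measured in a face-recognition embedding space (e.g., ArcFace), not by a fixed layer split as in this toy version.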
- K. Yang, J. H. Yau, L. Fei-Fei, J. Deng, and O. Russakovsky, “A study of face obfuscation in imagenet,” in Proc. Int. Conf. Mach. Learn. PMLR, 2022, pp. 25313–25330.
- E. Y.-N. Sun, H.-C. Wu, C. Busch, S. C.-H. Huang, Y.-C. Kuan, and S. Y. Chang, “Efficient recoverable cryptographic mosaic technique by permutations,” IEEE Trans. Circuits Syst. Video Technol., vol. 31, no. 1, pp. 112–125, 2021.
- J. Zhou and C.-M. Pun, “Personal privacy protection via irrelevant faces tracking and pixelation in video live streaming,” IEEE Trans. Inf. Forensics Security, vol. 16, pp. 1088–1103, 2021.
- H. Wu, X. Tian, M. Li, Y. Liu, G. Ananthanarayanan, F. Xu, and S. Zhong, “PECAM: privacy-enhanced video streaming and analytics via securely-reversible transformation,” in Proc. 27th Annu. Int. Conf. Mobile Comput. Netw., 2021, pp. 229–241.
- R. Zhao, Y. Zhang, R. Lan, Z. Hua, and Y. Xiang, “Heterogeneous and customized cost-efficient reversible image degradation for green IoT,” IEEE Internet Things J., vol. 10, no. 3, pp. 2630–2645, 2023.
- J. Cao, B. Liu, Y. Wen, R. Xie, and L. Song, “Personalized and invertible face de-identification by disentangled identity information manipulation,” in Proc. IEEE/CVF Int. Conf. Comput. Vis., 2021, pp. 3334–3342.
- X. Gu, W. Luo, M. S. Ryoo, and Y. J. Lee, “Password-conditioned anonymization and deanonymization with face identity transformers,” in Proc. Eur. Conf. Comput. Vis. Springer, 2020, pp. 727–743.
- J.-W. Chen, L.-J. Chen, C.-M. Yu, and C.-S. Lu, “Perceptual indistinguishability-net (PI-Net): Facial image obfuscation with manipulable semantics,” in Proc. IEEE/CVF Conf. Comput. Vis. Pattern Recognit., 2021, pp. 6474–6483.
- Z. You, S. Li, Z. Qian, and X. Zhang, “Reversible privacy-preserving recognition,” in Proc. IEEE Int. Conf. Multimedia Expo. IEEE, 2021, pp. 1–6.
- L. Zhai, Q. Guo, X. Xie, L. Ma, Y. E. Wang, and Y. Liu, “A3GAN: attribute-aware anonymization networks for face de-identification,” in Proc. ACM Int. Conf. Multimedia, 2022, pp. 5303–5313.
- J. Ji, H. Wang, Y. Huang, J. Wu, X. Xu, S. Ding, S. Zhang, L. Cao, and R. Ji, “Privacy-preserving face recognition with learnable privacy budgets in frequency domain,” in Proc. Eur. Conf. Comput. Vis. Springer, 2022, pp. 475–491.
- J. Li, L. Han, R. Chen, H. Zhang, B. Han, L. Wang, and X. Cao, “Identity-preserving face anonymization via adaptively facial attributes obfuscation,” in Proc. ACM Int. Conf. Multimedia, 2021, pp. 3891–3899.
- L. Yuan, L. Liu, X. Pu, Z. Li, H. Li, and X. Gao, “PRO-Face: A generic framework for privacy-preserving recognizable obfuscation of face images,” in Proc. ACM Int. Conf. Multimedia, 2022, pp. 1661–1669.
- Z. Yuan, Z. You, S. Li, Z. Qian, X. Zhang, and A. Kot, “On generating identifiable virtual faces,” in Proc. ACM Int. Conf. Multimedia, 2022, pp. 1465–1473.
- T. Wang, Y. Zhang, S. Qi, R. Zhao, Z. Xia, and J. Weng, “Security and privacy on generative data in AIGC: A survey,” arXiv preprint arXiv:2309.09435, 2023.
- J. Li, L. Han, H. Zhang, X. Han, J. Ge, and X. Cao, “Learning disentangled representations for identity preserving surveillance face camouflage,” in Proc. Int. Conf. Pattern Recognit. IEEE, 2021, pp. 9748–9755.
- Y. Mi, Y. Huang, J. Ji, H. Liu, X. Xu, S. Ding, and S. Zhou, “Duetface: Collaborative privacy-preserving face recognition via channel splitting in the frequency domain,” in Proc. ACM Int. Conf. Multimedia, 2022, pp. 6755–6764.
- Y. Wang, J. Liu, M. Luo, L. Yang, and L. Wang, “Privacy-preserving face recognition in the frequency domain,” in Proc. AAAI Conf. Artif. Intell., vol. 36, no. 3, 2022, pp. 2558–2566.
- H. Wang, X. Wu, Z. Huang, and E. P. Xing, “High-frequency component helps explain the generalization of convolutional neural networks,” in Proc. IEEE/CVF Conf. Comput. Vis. Pattern Recognit., 2020, pp. 8684–8694.
- Y. Zhang, T. Wang, R. Zhao, W. Wen, and Y. Zhu, “RAPP: Reversible privacy preservation for various face attributes,” IEEE Trans. Inf. Forensics Security, vol. 18, pp. 3074–3087, 2023.
- C. Peng, S. Wan, Z. Miao, D. Liu, Y. Zheng, and N. Wang, “Anonym-recognizer: Relationship-preserving face anonymization and recognition,” in Proc. Int. Workshop Hum.-Centric Multimedia Anal., 2022, pp. 1–6.
- T. Wang, Y. Zhang, R. Zhao, W. Wen, and R. Lan, “Identifiable face privacy protection via virtual identity transformation,” IEEE Signal Process. Lett., vol. 30, pp. 773–777, 2023.
- I. Goodfellow, J. Pouget-Abadie, M. Mirza, B. Xu, D. Warde-Farley, S. Ozair, A. Courville, and Y. Bengio, “Generative adversarial networks,” Commun. ACM, vol. 63, no. 11, pp. 139–144, 2020.
- T. Karras, S. Laine, and T. Aila, “A style-based generator architecture for generative adversarial networks,” in Proc. IEEE/CVF Conf. Comput. Vis. Pattern Recognit., 2019, pp. 4401–4410.
- T. Karras, S. Laine, M. Aittala, J. Hellsten, J. Lehtinen, and T. Aila, “Analyzing and improving the image quality of StyleGAN,” in Proc. IEEE/CVF Conf. Comput. Vis. Pattern Recognit., 2020, pp. 8110–8119.
- T. Karras, M. Aittala, S. Laine, E. Härkönen, J. Hellsten, J. Lehtinen, and T. Aila, “Alias-free generative adversarial networks,” Adv. Neural Inf. Process. Syst., vol. 34, pp. 852–863, 2021.
- Y. Shen, C. Yang, X. Tang, and B. Zhou, “InterFaceGAN: Interpreting the disentangled face representation learned by GANs,” IEEE Trans. Pattern Anal. Mach. Intell., vol. 44, no. 4, pp. 2004–2018, 2022.
- H. Liang, X. Hou, and L. Shen, “SSflow: Style-guided neural spline flows for face image manipulation,” in Proc. ACM Int. Conf. Multimedia, 2021, pp. 79–87.
- W. Xia, Y. Zhang, Y. Yang, J.-H. Xue, B. Zhou, and M.-H. Yang, “GAN inversion: A survey,” IEEE Trans. Pattern Anal. Mach. Intell., vol. 45, no. 3, pp. 3121–3138, 2023.
- R. Abdal, Y. Qin, and P. Wonka, “Image2StyleGAN: How to embed images into the StyleGAN latent space?” in Proc. IEEE/CVF Int. Conf. Comput. Vis., 2019, pp. 4431–4440.
- O. Tov, Y. Alaluf, Y. Nitzan, O. Patashnik, and D. Cohen-Or, “Designing an encoder for StyleGAN image manipulation,” ACM Trans. Graph., vol. 40, no. 4, pp. 1–14, 2021.
- K. He, X. Zhang, S. Ren, and J. Sun, “Deep residual learning for image recognition,” in Proc. IEEE/CVF Conf. Comput. Vis. Pattern Recognit., 2016, pp. 770–778.
- A. Shrivastava, A. Gupta, and R. Girshick, “Training region-based object detectors with online hard example mining,” in Proc. IEEE Conf. Comput. Vis. Pattern Recognit., 2016, pp. 761–769.
- J. Deng, J. Guo, N. Xue, and S. Zafeiriou, “ArcFace: Additive angular margin loss for deep face recognition,” in Proc. IEEE/CVF Conf. Comput. Vis. Pattern Recognit., 2019, pp. 4690–4699.
- O. Ronneberger, P. Fischer, and T. Brox, “U-Net: Convolutional networks for biomedical image segmentation,” in Proc. Int. Conf. Med. Image Comput. Comput.-Assist. Intervent. Springer, 2015, pp. 234–241.
- L. Li, J. Bao, H. Yang, D. Chen, and F. Wen, “Advancing high fidelity identity swapping for forgery detection,” in Proc. IEEE/CVF Conf. Comput. Vis. Pattern Recognit., 2020, pp. 5074–5083.
- Z. He, W. Zuo, M. Kan, S. Shan, and X. Chen, “AttGAN: Facial attribute editing by only changing what you want,” IEEE Trans. Image Process., vol. 28, no. 11, pp. 5464–5478, 2019.
- Q. Deng, Q. Li, J. Cao, Y. Liu, and Z. Sun, “Controllable multi-attribute editing of high-resolution face images,” IEEE Trans. Inf. Forensics Security, vol. 16, pp. 1410–1423, 2020.
- T. Wang, Y. Zhang, Y. Fan, J. Wang, and Q. Chen, “High-fidelity GAN inversion for image attribute editing,” in Proc. IEEE/CVF Conf. Comput. Vis. Pattern Recognit., 2022, pp. 11379–11388.
- Y. Xu, B. Deng, J. Wang, Y. Jing, J. Pan, and S. He, “High-resolution face swapping via latent semantics disentanglement,” in Proc. IEEE/CVF Conf. Comput. Vis. Pattern Recognit., 2022, pp. 7642–7651.
- E. Richardson, Y. Alaluf, O. Patashnik, Y. Nitzan, Y. Azar, S. Shapiro, and D. Cohen-Or, “Encoding in style: a StyleGAN encoder for image-to-image translation,” in Proc. IEEE/CVF Conf. Comput. Vis. Pattern Recognit., 2021, pp. 2287–2296.
- S. Chen, Y. Liu, X. Gao, and Z. Han, “MobileFaceNets: Efficient CNNs for accurate real-time face verification on mobile devices,” in Proc. Chin. Conf. Biometric Recognit. Springer, 2018, pp. 428–438.
- C. Szegedy, S. Ioffe, V. Vanhoucke, and A. Alemi, “Inception-v4, inception-ResNet and the impact of residual connections on learning,” in Proc. AAAI Conf. Artif. Intell., vol. 31, no. 1, 2017.
- I. C. Duta, L. Liu, F. Zhu, and L. Shao, “Improved residual networks for image and video recognition,” in Proc. Int. Conf. Pattern Recognit., 2021, pp. 9415–9422.