
Inclusive normalization of face images to passport format (2312.14544v1)

Published 22 Dec 2023 in cs.CV and cs.AI

Abstract: Face recognition has seen increasing real-world deployment in recent years. However, when skin color bias is coupled with intra-personal variations such as harsh illumination, face recognition is more likely to fail, even under human inspection. Face normalization methods address such challenges by removing intra-personal variations from an input image while preserving identity. However, most face normalization methods can remove only one or two variations, ignore dataset biases such as skin color bias, and produce outputs that look unrealistic to human observers. In this work, a style-based face normalization model (StyleFNM) is proposed to remove most intra-personal variations, including large pose changes, bad or harsh illumination, low resolution, blur, facial expressions, and accessories such as sunglasses. Dataset bias is also addressed by controlling a pretrained GAN to generate a balanced dataset of passport-like images. Experimental results show that StyleFNM generates more realistic outputs and significantly improves both the accuracy and the fairness of face recognition systems.
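The abstract does not give implementation details, but the core idea of a style-based normalizer can be illustrated with a short sketch. The snippet below is one plausible reading under explicit assumptions: a pretrained StyleGAN-like generator, an encoder that inverts a face into the extended W+ latent space, and a precomputed "passport" latent averaged over frontal, evenly lit photos. Mixing the identity-bearing layers of the input latent into the passport latent would remove pose, lighting, and accessory variation while keeping the identity. All names here (encoder, generator, passport_w, the layer split) are hypothetical placeholders, not the paper's actual API.

```python
# Minimal sketch of style mixing for passport-style face normalization.
# Assumes a StyleGAN2-like setup: latents live in W+ with shape
# (batch, NUM_LAYERS, W_DIM). The encoder, generator, and passport_w
# are hypothetical inputs, not components defined by the paper.
import torch

NUM_LAYERS, W_DIM = 18, 512  # typical StyleGAN2 W+ dimensions at 1024px

@torch.no_grad()
def normalize_face(img, encoder, generator, passport_w, id_layers=range(4, 9)):
    """Map an in-the-wild face to a passport-like image.

    img:        input face tensor, shape (1, 3, H, W)
    encoder:    callable mapping an image to a W+ latent (1, NUM_LAYERS, W_DIM)
    generator:  callable synthesizing an image from a W+ latent
    passport_w: average latent of frontal, evenly lit passport-style photos
    id_layers:  layers assumed to carry identity (an assumption, not the
                paper's actual split)
    """
    w_input = encoder(img)          # invert the input image into W+ space
    w_mixed = passport_w.clone()    # start from the canonical passport style
    for i in id_layers:
        w_mixed[:, i] = w_input[:, i]  # keep identity-bearing layers from input
    return generator(w_mixed)       # render the normalized, passport-like face
```

The layer split follows the common observation that in StyleGAN-style generators the coarse layers tend to control pose, the middle layers facial geometry, and the fine layers color and lighting, which is why the sketch keeps only the middle layers from the input. The paper's actual layer assignment and training losses (identity, adversarial, reconstruction) are not specified in the abstract.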
