Towards Inclusive Face Recognition Through Synthetic Ethnicity Alteration (2405.01273v2)

Published 2 May 2024 in cs.CV and cs.AI

Abstract: Numerous studies have shown that existing Face Recognition Systems (FRS), including commercial ones, often exhibit biases toward certain ethnicities due to under-represented data. In this work, we explore ethnicity alteration and skin-tone modification using synthetic face image generation methods to increase the diversity of datasets. We conduct a detailed analysis by first constructing a balanced face image dataset representing three ethnicities: Asian, Black, and Indian. We then make use of existing Generative Adversarial Network (GAN)-based image-to-image translation and manifold learning models to alter the ethnicity from one group to another. A systematic analysis is further conducted to assess the suitability of such datasets for FRS by studying the realism of the skin-tone representation using the Individual Typology Angle (ITA). We further analyze the quality characteristics using existing face image quality assessment (FIQA) approaches, and then provide a holistic FRS performance analysis using four different systems. Our findings pave the way for future research on (i) developing both ethnicity-specific and general (any-to-any) ethnicity alteration models, (ii) extending such approaches to create databases with diverse skin tones, and (iii) creating datasets representing various ethnicities, which can further help mitigate bias while addressing privacy concerns.

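The skin-tone analysis mentioned in the abstract relies on the Individual Typology Angle (ITA), defined in the CIELAB color space as ITA = arctan((L* − 50)/b*), expressed in degrees. The following is a minimal illustrative sketch, not the authors' code: it assumes an already-segmented skin patch in sRGB and uses scikit-image for the CIELAB conversion, and the skin-tone groups use commonly cited thresholds that may differ from the grouping used in the paper.

```python
import numpy as np
from skimage import color  # scikit-image: sRGB -> CIELAB conversion


def individual_typology_angle(skin_rgb):
    """Mean ITA (degrees) of a skin patch.

    skin_rgb: HxWx3 float array of sRGB values in [0, 1], assumed to contain
    only skin pixels (the skin-segmentation step is not reproduced here).
    """
    lab = color.rgb2lab(skin_rgb)          # L* in [0, 100], b* is the yellow-blue axis
    L, b = lab[..., 0], lab[..., 2]
    # ITA = arctan((L* - 50) / b*) in degrees; arctan2 avoids division by zero
    # and agrees with the plain arctan form for the positive b* typical of skin.
    ita = np.degrees(np.arctan2(L - 50.0, b))
    return float(np.mean(ita))


def ita_group(ita_deg):
    """Map an ITA value to a skin-tone group using commonly cited thresholds."""
    if ita_deg > 55:
        return "very light"
    if ita_deg > 41:
        return "light"
    if ita_deg > 28:
        return "intermediate"
    if ita_deg > 10:
        return "tan"
    if ita_deg > -30:
        return "brown"
    return "dark"
```

Higher ITA values correspond to lighter skin, so a sketch like this can be used, in the spirit of the abstract, to check whether ethnicity-altered images fall into plausible skin-tone groups for the target ethnicity.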