Mitigating Gender Bias in Face Recognition Using the von Mises-Fisher Mixture Model (2210.13664v3)

Published 24 Oct 2022 in cs.CV and cs.AI

Abstract: Despite the high performance and reliability of deep learning algorithms across a wide range of everyday applications, many investigations show that numerous models exhibit biases, discriminating against specific subgroups of the population (e.g., gender, ethnicity). This urges practitioners to develop fair systems with uniform or comparable performance across sensitive groups. In this work, we investigate the gender bias of deep Face Recognition networks. In order to measure this bias, we introduce two new metrics, $\mathrm{BFAR}$ and $\mathrm{BFRR}$, that better reflect the inherent deployment needs of Face Recognition systems. Motivated by geometric considerations, we mitigate gender bias through a new post-processing methodology which transforms the deep embeddings of a pre-trained model to give more representation power to discriminated subgroups. It consists of training a shallow neural network by minimizing a Fair von Mises-Fisher loss whose hyperparameters account for the intra-class variance of each gender. Interestingly, we empirically observe that these hyperparameters are correlated with our fairness metrics. Extensive numerical experiments on a variety of datasets show that a careful selection of these hyperparameters significantly reduces gender bias. The code used for the experiments can be found at https://github.com/JRConti/EthicalModule_vMF.
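The post-processing idea in the abstract lends itself to a compact illustration. Below is a minimal PyTorch sketch of a vMF-style fairness loss: embeddings are constrained to the unit hypersphere, each identity is modeled by a von Mises-Fisher component, and the concentration parameter kappa is chosen per gender, playing the role of the tunable hyperparameters the abstract refers to. This is a sketch under stated assumptions, not the authors' implementation (which lives in the linked repository): the class name `FairVMFLoss`, the helper `log_vmf_constant`, the `id_gender`, `kappa_f`, and `kappa_m` arguments, and the exact normalisation are all illustrative choices.

```python
# Hypothetical sketch of a fair vMF-style loss in PyTorch.
# The authors' actual code is at https://github.com/JRConti/EthicalModule_vMF;
# all names, shapes, and the exact normalisation below are assumptions.
import mpmath  # arbitrary-precision Bessel functions for the vMF constant
import torch
import torch.nn as nn
import torch.nn.functional as F


def log_vmf_constant(dim: int, kappa: float) -> float:
    """log C_d(kappa) of a d-dimensional vMF density,
    C_d(k) = k^(d/2-1) / ((2*pi)^(d/2) * I_{d/2-1}(k)),
    evaluated with mpmath to keep the Bessel term numerically stable."""
    nu = dim / 2.0 - 1.0
    log_c = (nu * mpmath.log(kappa)
             - (dim / 2.0) * mpmath.log(2 * mpmath.pi)
             - mpmath.log(mpmath.besseli(nu, kappa)))
    return float(log_c)


class FairVMFLoss(nn.Module):
    """Softmax over per-identity vMF log-likelihoods, with a concentration
    kappa chosen per gender (kappa_f, kappa_m are the hyperparameters to tune)."""

    def __init__(self, num_ids: int, dim: int, id_gender: torch.Tensor,
                 kappa_f: float, kappa_m: float):
        super().__init__()
        # one learnable mean direction (centroid) per identity
        self.centroids = nn.Parameter(F.normalize(torch.randn(num_ids, dim), dim=1))
        # per-identity concentration: gender 0 -> kappa_f, gender 1 -> kappa_m
        kappas = torch.where(id_gender.bool(),
                             torch.full((num_ids,), kappa_m),
                             torch.full((num_ids,), kappa_f))
        self.register_buffer("kappas", kappas)
        # per-identity log normalising constant, precomputed once per kappa
        log_c = torch.where(id_gender.bool(),
                            torch.full((num_ids,), log_vmf_constant(dim, kappa_m)),
                            torch.full((num_ids,), log_vmf_constant(dim, kappa_f)))
        self.register_buffer("log_c", log_c)

    def forward(self, z: torch.Tensor, labels: torch.Tensor) -> torch.Tensor:
        z = F.normalize(z, dim=1)                 # embeddings on the unit sphere
        mu = F.normalize(self.centroids, dim=1)   # centroids on the unit sphere
        # log-likelihood of each embedding under each identity's vMF component
        logits = self.log_c + self.kappas * (z @ mu.t())
        return F.cross_entropy(logits, labels)
```

In this sketch, the per-gender kappa values control how tightly each identity's embeddings are expected to cluster on the hypersphere; the abstract reports that selecting these hyperparameters carefully, guided by their correlation with the BFAR/BFRR metrics, is what drives the reduction in gender bias.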

Authors (5)
  1. Jean-Rémy Conti (3 papers)
  2. Nathan Noiry (19 papers)
  3. Vincent Despiegel (3 papers)
  4. Stéphane Gentric (5 papers)
  5. Stéphan Clémençon (70 papers)
Citations (9)