Breaking the Global North Stereotype: A Global South-centric Benchmark Dataset for Auditing and Mitigating Biases in Facial Recognition Systems (2407.15810v2)

Published 22 Jul 2024 in cs.CV and cs.CY

Abstract: Facial Recognition Systems (FRSs) are being developed and deployed globally at unprecedented rates. Most platforms are designed in a limited set of countries but deployed worldwide, without adequate checkpoints. This is especially problematic for Global South countries, which lack strong legislation to safeguard people facing disparate performance of these systems. The combination of unavailable datasets, a limited understanding of FRS functionality, and a lack of low-resource bias-mitigation measures accentuates the problem. In this work, we propose a new face dataset composed of 6,579 unique male and female sportspersons from eight countries around the world. More than 50% of the dataset comprises individuals from Global South countries, and the dataset is demographically diverse. To aid adversarial audits and robust model training, each image has four adversarial variants, totaling over 40,000 images. We also benchmark five popular FRSs, both commercial and open-source, on the task of gender prediction (and country prediction for one of the open-source models, as an example of red-teaming). Experiments on industrial FRSs reveal accuracies ranging from 38.1% to 98.2%, with a large disparity between males and females in the Global South (max difference of 38.5%). Biases are also observed in all FRSs between females of the Global North and South (max difference of ~50%). Grad-CAM analysis identifies the nose, forehead and mouth as the regions of interest for one of the open-source FRSs. Utilizing this insight, we design simple, low-resource bias-mitigation solutions using few-shot and novel contrastive learning techniques, significantly improving accuracy and reducing the male-female disparity from 50% to 1.5% in one of the settings. In the red-teaming experiment with the open-source Deepface model, contrastive learning proves more effective than simple fine-tuning.
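The disparity figures quoted above come from comparing per-group accuracies. As a minimal sketch of that kind of audit metric (the group labels, record format, and toy predictions below are illustrative, not the paper's actual data or code):

```python
# Hypothetical audit sketch: per-group accuracy and the absolute
# accuracy gap (disparity) between two demographic groups for a
# gender-prediction task. All names and records are illustrative.

def group_accuracy(records, group):
    """Accuracy over the records belonging to one demographic group."""
    hits = [r["pred"] == r["true"] for r in records if r["group"] == group]
    return sum(hits) / len(hits)

def disparity(records, group_a, group_b):
    """Absolute accuracy gap between two demographic groups."""
    return abs(group_accuracy(records, group_a) - group_accuracy(records, group_b))

# Toy predictions in which Global South females are misclassified more often.
records = [
    {"group": "GS-female", "pred": "F", "true": "F"},
    {"group": "GS-female", "pred": "M", "true": "F"},
    {"group": "GS-male",   "pred": "M", "true": "M"},
    {"group": "GS-male",   "pred": "M", "true": "M"},
]

print(disparity(records, "GS-male", "GS-female"))  # 0.5
```

Reporting the gap per subgroup pair (e.g. Global North vs. Global South females), rather than a single overall accuracy, is what surfaces the disparities the paper describes.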

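The low-resource mitigation described in the abstract relies on contrastive learning. A generic pairwise contrastive loss of the kind commonly used for such fine-tuning can be sketched as follows; the margin value and embeddings are illustrative, not the paper's configuration:

```python
import math

def euclidean(u, v):
    """Euclidean distance between two embedding vectors."""
    return math.sqrt(sum((a - b) ** 2 for a, b in zip(u, v)))

def contrastive_loss(u, v, same_class, margin=1.0):
    """Classic pairwise contrastive loss: pull same-class embeddings
    together, push different-class embeddings at least `margin` apart."""
    d = euclidean(u, v)
    if same_class:
        return d ** 2
    return max(0.0, margin - d) ** 2

# Same-class pair: loss grows with the squared distance.
print(contrastive_loss([0.0, 0.0], [0.3, 0.4], same_class=True))   # ≈ 0.25
# Different-class pair already separated beyond the margin: ≈ zero loss.
print(contrastive_loss([0.0, 0.0], [0.6, 0.8], same_class=False))  # ≈ 0.0
```

Training with such a loss on a small number of labeled pairs from the under-served group is one plausible reading of the "few-shot, low-resource" mitigation the abstract reports.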
