Sum of Group Error Differences: A Critical Examination of Bias Evaluation in Biometric Verification and a Dual-Metric Measure (2404.15385v1)

Published 23 Apr 2024 in cs.CV, cs.AI, and cs.CY

Abstract: Biometric Verification (BV) systems often exhibit accuracy disparities across demographic groups, leading to biases in BV applications. Assessing and quantifying these biases is essential for ensuring the fairness of BV systems. However, existing bias evaluation metrics in BV have limitations: they focus exclusively on match or non-match error rates, overlook bias in demographic groups whose performance falls between the best and worst levels, and neglect the magnitude of the bias present. This paper presents an in-depth analysis of the limitations of current bias evaluation metrics in BV and, through experimental analysis, demonstrates their contextual suitability, merits, and limitations. Additionally, it introduces a novel general-purpose bias evaluation measure for BV, the "Sum of Group Error Differences (SEDG)". Our experimental results on controlled synthetic datasets demonstrate the effectiveness of demographic bias quantification when using existing metrics and our own proposed measure. We discuss the applicability of the bias evaluation metrics in a set of simulated demographic bias scenarios and provide scenario-based metric recommendations. Our code is publicly available at https://github.com/alaaobeid/SEDG.
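The abstract does not give SEDG's formula, but the name suggests aggregating differences between per-group error rates. The sketch below shows one plausible reading, summing absolute pairwise differences of group error rates, as a minimal illustration only; the function name, inputs, and aggregation rule are assumptions, and the paper's actual definition (including how it combines match and non-match error rates into a dual metric) may differ.

```python
def sum_of_group_error_differences(group_errors):
    """Illustrative sketch of a 'sum of group error differences'
    style bias measure: the sum of absolute pairwise differences
    between per-group error rates. A value of 0 means all groups
    have identical error rates; larger values mean larger bias.

    group_errors: dict mapping group name -> error rate (e.g. FMR
    or FNMR for that demographic group).

    NOTE: this is an assumed formulation for illustration, not the
    paper's verified definition of SEDG.
    """
    rates = list(group_errors.values())
    total = 0.0
    for i in range(len(rates)):
        for j in range(i + 1, len(rates)):
            total += abs(rates[i] - rates[j])
    return total


# Hypothetical per-group False Match Rates for three demographic groups.
fmr_by_group = {"group_a": 0.01, "group_b": 0.03, "group_c": 0.02}
print(sum_of_group_error_differences(fmr_by_group))
```

Because the paper describes a dual-metric measure, one would presumably evaluate such a quantity on both match (FMR) and non-match (FNMR) error rates rather than a single error type; see the paper and its repository for the authoritative definition.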
