
Are Explainability Tools Gender Biased? A Case Study on Face Presentation Attack Detection (2304.13419v2)

Published 26 Apr 2023 in cs.CV

Abstract: Face recognition (FR) systems continue to spread in our daily lives, accompanied by a growing demand for explainability and interpretability of FR systems, which are mainly based on deep learning. While bias across demographic groups in FR systems has already been studied, bias in the explainability tools themselves has not yet been investigated. Because such tools aim to steer further development and enable a better understanding of computer vision problems, bias in their outcomes can trigger a chain of biased decisions. In this paper, we explore whether the outcomes of explainability tools are biased, using face presentation attack detection as a case study. By applying two different explainability tools to models with different levels of bias, we examine the bias in the tools' outcomes. Our study shows that these tools exhibit clear signs of gender bias in the quality of their explanations.
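The abstract does not name the two explainability tools or the explanation-quality measure the authors used. As a minimal, hypothetical sketch of the kind of experiment it describes, the following PyTorch snippet computes vanilla-gradient saliency maps for a presentation attack detection (PAD) classifier and compares a crude quality proxy, the fraction of saliency mass falling inside a central face crop, between gender groups. The model, the data loader yielding gender labels, and the quality proxy are all assumptions for illustration, not the paper's method.

```python
# Hypothetical sketch (not the paper's code): compare the quality of
# saliency-based explanations for a face PAD model across gender groups.
# Assumptions: `model` is a torch.nn.Module binary PAD classifier returning
# a (B, 1) logit, and `loader` yields (image, label, gender) batches.

import torch

def saliency_map(model, x):
    """Vanilla gradient saliency: |d score / d input|, max over channels."""
    x = x.clone().requires_grad_(True)
    score = model(x).squeeze(-1).sum()   # sum of per-sample PAD logits
    score.backward()
    return x.grad.abs().amax(dim=1)      # (B, H, W)

def central_mass(sal, margin=0.25):
    """Crude quality proxy: fraction of saliency inside a central crop,
    standing in for 'how much of the explanation covers the face region'."""
    h, w = sal.shape[-2:]
    t, l = int(h * margin), int(w * margin)
    inner = sal[..., t:h - t, l:w - l].sum(dim=(-2, -1))
    return inner / sal.sum(dim=(-2, -1)).clamp_min(1e-8)

@torch.enable_grad()
def quality_by_gender(model, loader):
    """Aggregate the quality proxy per gender group and report group means."""
    model.eval()
    scores = {"female": [], "male": []}
    for images, _, genders in loader:
        q = central_mass(saliency_map(model, images))
        for qi, g in zip(q.tolist(), genders):
            scores[g].append(qi)
    return {g: sum(v) / len(v) for g, v in scores.items() if v}
```

A real evaluation would substitute the paper's actual explainability tools and a principled explanation-quality metric; the snippet only illustrates the group-then-compare pattern the abstract implies.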

Authors (4)
  1. Marco Huber (25 papers)
  2. Meiling Fang (25 papers)
  3. Fadi Boutros (49 papers)
  4. Naser Damer (96 papers)
Citations (9)
