Can Explainable AI Explain Unfairness? A Framework for Evaluating Explainable AI (2106.07483v1)

Published 14 Jun 2021 in cs.CY, cs.AI, and cs.LG

Abstract: Many ML models are opaque to humans, producing decisions too complex for humans to easily understand. In response, explainable artificial intelligence (XAI) tools that analyze the inner workings of a model have been created. Despite these tools' strength in translating model behavior, critics have raised concerns that XAI can serve as a vehicle for fairwashing by misleading users into trusting biased or incorrect models. In this paper, we created a framework for evaluating explainable AI tools with respect to their capabilities for detecting and addressing issues of bias and fairness, as well as their capacity to communicate these results to their users clearly. We found that despite their capabilities in simplifying and explaining model behavior, many prominent XAI tools lack features that could be critical in detecting bias. Developers can use our framework to identify modifications needed in their toolkits to reduce issues like fairwashing.
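The abstract argues that XAI toolkits should surface group-level disparities, not just feature attributions. As a minimal, hypothetical sketch (not from the paper) of the kind of check such a toolkit could expose, the snippet below computes the demographic parity difference, a standard fairness metric: the gap in favorable-prediction rates between two demographic groups, on synthetic data.

```python
# Hypothetical bias check an XAI toolkit could expose (illustrative only).
# Demographic parity difference: the gap in positive-outcome rates between
# two groups. All data below is synthetic.

def demographic_parity_difference(predictions, groups):
    """Return the gap in positive-prediction rate between the two groups."""
    rates = {}
    for g in set(groups):
        outcomes = [p for p, gg in zip(predictions, groups) if gg == g]
        rates[g] = sum(outcomes) / len(outcomes)
    low, high = sorted(rates.values())
    return high - low

# Synthetic model outputs (1 = favorable decision) and group labels
preds = [1, 0, 1, 1, 0, 1, 0, 0, 1, 0]
grps = ["A", "A", "A", "A", "A", "B", "B", "B", "B", "B"]

gap = demographic_parity_difference(preds, grps)
print(round(gap, 2))  # group A rate 0.6, group B rate 0.4 -> gap 0.2
```

A toolkit that reports only per-feature importance would leave a gap like this invisible; the paper's framework asks whether a tool can both compute such disparities and communicate them clearly to users.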

Authors (4)
  1. Kiana Alikhademi (1 paper)
  2. Brianna Richardson (3 papers)
  3. Emma Drobina (1 paper)
  4. Juan E. Gilbert (3 papers)
Citations (30)