Should We Trust (X)AI? Design Dimensions for Structured Experimental Evaluations (2009.06433v1)

Published 14 Sep 2020 in cs.HC and cs.AI

Abstract: This paper systematically derives design dimensions for the structured evaluation of explainable artificial intelligence (XAI) approaches. These dimensions enable a descriptive characterization, facilitating comparisons between different study designs. They further structure the design space of XAI, converging towards a precise terminology required for a rigorous study of XAI. Our literature review differentiates between comparative studies and application papers, revealing methodological differences between the fields of machine learning, human-computer interaction, and visual analytics. Generally, each of these disciplines targets specific parts of the XAI process. Bridging the resulting gaps enables a holistic evaluation of XAI in real-world scenarios, as proposed by our conceptual model characterizing bias sources and trust-building. Furthermore, we identify and discuss the potential for future work based on observed research gaps that should lead to better coverage of the proposed model.

Authors (6)
  1. Fabian Sperrle (5 papers)
  2. Mennatallah El-Assady (54 papers)
  3. Grace Guo (11 papers)
  4. Duen Horng Chau (109 papers)
  5. Alex Endert (40 papers)
  6. Daniel Keim (19 papers)
Citations (17)