Principles of Explanation in Human-AI Systems (2102.04972v1)

Published 9 Feb 2021 in cs.AI

Abstract: Explainable Artificial Intelligence (XAI) has re-emerged in response to the development of modern AI and ML systems. These systems are complex and sometimes biased, but they nevertheless make decisions that impact our lives. XAI systems are frequently algorithm-focused: starting and ending with an algorithm that implements a basic, untested idea about explainability. These systems are often not tested to determine whether the algorithm helps users accomplish any goals, and so their explainability remains unproven. We propose an alternative: to start with human-focused principles for the design, testing, and implementation of XAI systems, and implement algorithms to serve that purpose. In this paper, we review some of the basic concepts that have been used for user-centered XAI systems over the past 40 years of research. Based on these, we describe the "Self-Explanation Scorecard", which can help developers understand how they can empower users by enabling self-explanation. Finally, we present a set of empirically grounded, user-centered design principles that may guide developers to create successful explainable systems.

Authors (7)
  1. Shane T. Mueller (5 papers)
  2. Elizabeth S. Veinott (1 paper)
  3. Robert R. Hoffman (4 papers)
  4. Gary Klein (3 papers)
  5. Lamia Alam (2 papers)
  6. Tauseef Mamun (1 paper)
  7. William J. Clancey (2 papers)
Citations (52)