Questioning the AI: Informing Design Practices for Explainable AI User Experiences (2001.02478v3)

Published 8 Jan 2020 in cs.HC, cs.AI, cs.LG, and cs.SE

Abstract: A surge of interest in explainable AI (XAI) has led to a vast collection of algorithmic work on the topic. While many recognize the necessity to incorporate explainability features in AI systems, how to address real-world user needs for understanding AI remains an open question. By interviewing 20 UX and design practitioners working on various AI products, we seek to identify gaps between the current XAI algorithmic work and practices to create explainable AI products. To do so, we develop an algorithm-informed XAI question bank in which user needs for explainability are represented as prototypical questions users might ask about the AI, and use it as a study probe. Our work contributes insights into the design space of XAI, informs efforts to support design practices in this space, and identifies opportunities for future XAI work. We also provide an extended XAI question bank and discuss how it can be used for creating user-centered XAI.

Toward Explainable AI User Experiences: An Expert Overview

The paper "Toward Explainable AI User Experiences" provides a comprehensive examination of the intersection between explainable AI (XAI) algorithms and user-centered design practices. The researchers, Liao, Gruen, and Miller, focus on understanding the practical needs of users for AI explainability by conducting interviews with 20 UX and design practitioners involved in developing AI products. This paper identifies gaps between current XAI algorithmic work and the creation of user-friendly explainable AI systems, contributing both to the design space of XAI and the future development of AI systems tailored to real-world applications.

Research Overview

The authors investigate the challenges industry practitioners face in creating explainable AI products. Through their practitioner interviews, they identify diverse motivations for explainability, such as improving decision-making, enhancing trust, and helping users adapt how they interact with AI systems. They propose an "XAI question bank," a novel approach that represents user needs for explainability as prototypical questions users might ask about the AI. This methodology helps surface user priorities and guides the implementation of XAI features.
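
To make the idea concrete, the sketch below shows how a slice of such a question bank might be represented in code. This is a hypothetical illustration, not the paper's artifact: the category names loosely follow the question types the paper discusses (e.g., Why, Why not, What if, How), while the example questions and candidate method families are placeholders.

```python
# Hypothetical sketch of an XAI question bank: user needs are captured as
# prototypical questions, grouped by question type, with candidate XAI
# method families that could answer each type. Contents are illustrative.
from dataclasses import dataclass

@dataclass
class QuestionCategory:
    name: str                      # question type, e.g. "Why"
    prototypical_questions: list   # questions users might ask about the AI
    candidate_methods: list        # XAI method families that may answer them

QUESTION_BANK = [
    QuestionCategory(
        name="Why",
        prototypical_questions=["Why was this instance given this prediction?"],
        candidate_methods=["local feature attribution", "example-based explanation"],
    ),
    QuestionCategory(
        name="Why not",
        prototypical_questions=["Why was this instance NOT predicted as class B?"],
        candidate_methods=["contrastive explanation", "counterfactual explanation"],
    ),
    QuestionCategory(
        name="What if",
        prototypical_questions=["What would the system predict if this feature changed?"],
        candidate_methods=["interactive what-if probing", "partial dependence"],
    ),
    QuestionCategory(
        name="How (global)",
        prototypical_questions=["What is the overall logic of the model?"],
        candidate_methods=["global surrogate model", "rule extraction"],
    ),
]

def methods_for(question_type: str) -> list:
    """Look up candidate XAI methods for a given question type."""
    for category in QUESTION_BANK:
        if category.name == question_type:
            return category.candidate_methods
    return []
```

In a real product, each category would also carry design guidance on when the question arises and what user goal it serves, which is the kind of shared artifact the paper argues practitioners need.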

Key Findings

  1. Motivations for Explainability: The paper highlights that explainability in AI is driven by needs to gain insights, appropriately evaluate the AI's capabilities, adapt how one interacts with the AI, and fulfill ethical responsibilities. Designing for these motivations requires understanding the end user's goals and the downstream actions they intend to take after receiving an explanation.
  2. Discrepancies Between Algorithmic and Human Explanations: There are inherent gaps between how AI explanations are generated algorithmically and how humans naturally explain. The paper emphasizes the necessity for explanations that align with human intuition, mirroring how domain experts articulate their reasoning.
  3. Challenges in Realizing Explainable AI Products: Practitioners face obstacles not only in the technical application of XAI algorithms but also in aligning these with broader system and business objectives. The paper notes a need for resources to sensitize design practitioners to XAI possibilities and encourage collaboration with data scientists.
  4. Variability of Explainability Needs: User needs for explainability vary widely based on several factors, including motivation, usage context, algorithm type, and user expertise. The paper argues that understanding these variables is crucial for selecting appropriate explanation methods; the sketch following this list illustrates one way such variables could inform that selection.
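
As a rough illustration of how those variables might drive explanation choices, the toy function below picks an explanation style from a user's motivation and expertise. The mapping is invented for illustration and is not prescribed by the paper.

```python
# Toy heuristic (illustrative only): choose an explanation style from a few
# of the variables the paper identifies. Real products would need richer
# context and user research, not a hard-coded table.
def choose_explanation_style(motivation: str, expertise: str) -> str:
    if motivation == "debug_model" and expertise == "data_scientist":
        return "global surrogate + local feature attribution"
    if motivation == "calibrate_trust":
        # Lay users deciding whether to rely on a prediction often benefit
        # from contrastive, selective explanations rather than full detail.
        return "contrastive explanation with confidence information"
    if motivation == "actionable_recourse":
        # e.g. a declined loan applicant asking how to obtain a different outcome
        return "counterfactual explanation"
    return "short natural-language rationale"

print(choose_explanation_style("actionable_recourse", "end_user"))
# -> counterfactual explanation
```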

Implications and Future Directions

The research offers several insights into the future trajectory of XAI development and its adoption:

  • Interactive Explanations: Since effective human explanations tend to be contrastive and selective, the authors suggest a move toward interactive or conversational AI systems that let users engage dynamically with explanations and tailor them to their specific needs (a minimal sketch of such a loop follows this list).
  • User-Centric XAI Frameworks: The creation of frameworks that map user questions to specific XAI methods can enhance product design by aligning technical capabilities with user expectations.
  • Design and Implementation Tools: Practitioners require tools and heuristics that bridge the gap between user needs and algorithmic solutions. Developing shared artifacts can facilitate more effective cross-disciplinary collaboration.
  • Ethical Considerations: There's a growing recognition of the ethical imperatives surrounding AI explainability. Transparency is not only a design choice but a fundamental responsibility to users and society at large.
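
To ground the interactive-explanations point, here is a minimal sketch of a conversational loop that routes a user's follow-up question to a different explanation routine. The keyword routing and the stub explainers are assumptions for illustration; a production system would use intent classification and real XAI backends.

```python
# Minimal conversational-XAI loop (illustrative). Each handler is a stub
# standing in for a real XAI backend (e.g., attribution or counterfactual
# generation); the keyword routing stands in for an intent classifier.
def explain_why(instance):
    return "Top contributing features for this prediction: ..."

def explain_why_not(instance, other_class):
    return f"Compared with {other_class}, the decisive differences were: ..."

def explain_what_if(instance, change):
    return f"If {change}, the predicted outcome would change to: ..."

def route(question: str, instance=None) -> str:
    q = question.lower()
    if "why not" in q:
        return explain_why_not(instance, other_class="the alternative class")
    if "why" in q:
        return explain_why(instance)
    if "what if" in q:
        return explain_what_if(instance, change="the stated change")
    return "Could you rephrase? I can answer why, why-not, and what-if questions."

# Example exchange: the user drills down with contrastive follow-ups.
for question in ["Why did I get this result?",
                 "Why not the other outcome?",
                 "What if my income were higher?"]:
    print(question, "->", route(question))
```

The design choice worth noting is that each follow-up question narrows the explanation, mirroring the selective, contrastive character of human explanation that the authors highlight.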

Conclusion

The paper makes a significant contribution to understanding how to align XAI techniques with user requirements and advocates a multidisciplinary approach for future development. It urges collaboration between HCI practitioners and AI researchers to create frameworks and tools for human-centered, explainable AI. This research lays the groundwork for more nuanced, context-sensitive XAI solutions that respond to diverse user needs across domains.

Authors (3)
  1. Q. Vera Liao (49 papers)
  2. Daniel Gruen (60 papers)
  3. Sarah Miller (7 papers)
Citations (622)