
"Help Me Help the AI": Understanding How Explainability Can Support Human-AI Interaction (2210.03735v2)

Published 2 Oct 2022 in cs.HC, cs.AI, cs.CV, and cs.CY

Abstract: Despite the proliferation of explainable AI (XAI) methods, little is understood about end-users' explainability needs and behaviors around XAI explanations. To address this gap and contribute to understanding how explainability can support human-AI interaction, we conducted a mixed-methods study with 20 end-users of a real-world AI application, the Merlin bird identification app, and inquired about their XAI needs, uses, and perceptions. We found that participants desire practically useful information that can improve their collaboration with the AI, more so than technical system details. Relatedly, participants intended to use XAI explanations for various purposes beyond understanding the AI's outputs: calibrating trust, improving their task skills, changing their behavior to supply better inputs to the AI, and giving constructive feedback to developers. Finally, among existing XAI approaches, participants preferred part-based explanations that resemble human reasoning and explanations. We discuss the implications of our findings and provide recommendations for future XAI design.

Understanding How Explainability Can Support Human-AI Interaction

The paper "Help Me Help the AI": Understanding How Explainability Can Support Human-AI Interaction, presented at CHI 2023, explores Explainable AI (XAI) from the perspective of end-users. Despite the proliferation of XAI methods, there is a notable gap in understanding and addressing end-users' needs around AI explainability. This research aims to bridge that gap by delivering insights into user-specific requirements for improving human-AI interaction, particularly in real-world applications.

The paper focuses on the Merlin bird identification app as a case study of a real-world AI application. Through a mixed-methods study involving 20 participants, the researchers sought to understand end-users' needs, uses, and perceptions concerning XAI explanations in this context. Participants varied in both AI and birding expertise, ensuring a diverse set of perspectives on explainability.

The key findings from this paper indicate that participants desired explanations that are practically useful and facilitate better collaboration with the AI system rather than just technical insights. Importantly, explanations that mirrored human reasoning processes, particularly those that are part-based such as concept-based and prototype-based explanations, resonated well with users, suggesting an affinity for explanations that align with intuitive human understanding.
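The prototype-based explanations that participants preferred work by matching regions of the input image against learned "part" prototypes (e.g., a distinctive breast patch or wing bar) and reporting the strongest matches. A minimal sketch of that matching step, using cosine similarity between patch embeddings and prototype vectors, is below; all names, shapes, and data here are illustrative assumptions, not the Merlin app's actual pipeline:

```python
import numpy as np

def prototype_explanation(patch_embeddings, prototypes, prototype_labels):
    """Score each learned part-prototype against image patches.

    patch_embeddings: (n_patches, d) array of patch feature vectors.
    prototypes: (n_protos, d) array of learned part prototypes.
    prototype_labels: human-readable name for each prototype.
    Returns (label, best_similarity) pairs sorted by relevance.
    """
    # Normalize rows so dot products become cosine similarities.
    p = patch_embeddings / np.linalg.norm(patch_embeddings, axis=1, keepdims=True)
    q = prototypes / np.linalg.norm(prototypes, axis=1, keepdims=True)
    sims = p @ q.T                  # (n_patches, n_protos) similarity grid
    best = sims.max(axis=0)         # strongest patch match per prototype
    order = np.argsort(-best)       # most relevant prototype first
    return [(prototype_labels[i], float(best[i])) for i in order]

# Toy example: 3 image patches, 2 hypothetical part prototypes.
rng = np.random.default_rng(0)
patches = rng.normal(size=(3, 4))
protos = np.stack([
    patches[1] + 0.05 * rng.normal(size=4),  # nearly identical to patch 1
    rng.normal(size=4),                      # unrelated prototype
])
labels = ["orange breast patch", "wing bar"]
for name, score in prototype_explanation(patches, protos, labels):
    print(f"{name}: {score:.2f}")
```

A user-facing explanation would then render the top-scoring prototypes alongside the image regions they matched, which is what gives these explanations their human-reasoning-like, "this looks like that" character.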

Participants articulated a range of uses for XAI explanations beyond merely understanding AI outputs. They expressed intentions to use explanations to calibrate their trust in the AI, improve their own skills at the task, supply better inputs to the AI, and give constructive feedback to developers for system improvement. Notably, the desire to use explanations to enhance collaboration with the AI indicates a shift toward viewing the AI as a teammate rather than a mere tool.

The paper posits several implications for future XAI development:

  1. Human-Centric Design: Emphasize designing explanations that align with human cognitive processes and enhance user interaction with AI systems.
  2. Actionable Insights: Provide explanations that offer clear and actionable recommendations for users, aiding them in making informed decisions about interacting with AI systems.
  3. User-Centric Evaluation: Continuously engage with end-users to evaluate and iterate on XAI methods, ensuring alignment with user needs and contexts.

In summary, this paper contributes significantly to the discourse on XAI by foregrounding the user's perspective, which is crucial for practical deployments. Addressing the identified needs will likely enhance the usability and acceptance of AI systems in everyday contexts. As AI systems become increasingly integrated into daily life, fostering a human-centered approach in the development of XAI will be pivotal in advancing human-AI collaboration in diverse domains.

Authors (5)
  1. Sunnie S. Y. Kim (16 papers)
  2. Elizabeth Anne Watkins (11 papers)
  3. Olga Russakovsky (62 papers)
  4. Ruth Fong (21 papers)
  5. Andrés Monroy-Hernández (75 papers)
Citations (81)