Understanding How Explainability Can Support Human-AI Interaction
The paper "Help Me Help the AI": Understanding How Explainability Can Support Human-AI Interaction, presented at CHI 2023, examines Explainable AI (XAI) from the perspective of end-users. Despite the proliferation of XAI methods, few effectively address end-users' needs for AI explainability. This research aims to close that gap by delivering insights into user-specific requirements for improving human-AI interaction, particularly in real-world applications.
The paper uses the Merlin bird identification app as a case study of a real-world AI application. Through a mixed-methods study involving 20 participants, the researchers sought to understand end-users' needs, uses, and perceptions of XAI explanations in this context. The participants varied in both AI and birding expertise, ensuring a diverse set of perspectives on explainability.
The key findings indicate that participants wanted explanations that are practically useful and that facilitate better collaboration with the AI system, rather than purely technical insights. Notably, explanations that mirrored human reasoning, particularly part-based explanations such as concept-based and prototype-based ones, resonated well with users, suggesting an affinity for explanations that align with intuitive human understanding.
Participants articulated a range of uses for XAI explanations beyond merely understanding AI outputs. They intended to use explanations to calibrate their trust in the AI, improve their own skills in the task, provide better inputs to the AI, and give constructive feedback to developers for system improvement. In particular, the desire to use explanations to enhance collaboration with the AI signals a shift toward viewing AI as a teammate rather than a mere tool.
The paper posits several implications for future XAI development:
- Human-Centric Design: Emphasize designing explanations that align with human cognitive processes and enhance user interaction with AI systems.
- Actionable Insights: Provide explanations that offer clear and actionable recommendations for users, aiding them in making informed decisions about interacting with AI systems.
- User-Centric Evaluation: Continuously engage with end-users to evaluate and iterate on XAI methods, ensuring alignment with user needs and contexts.
In summary, this paper makes a significant contribution to the discourse on XAI by foregrounding the user's perspective, which is crucial for practical deployments. Addressing the identified needs is likely to improve the usability and acceptance of AI systems in everyday contexts. As AI becomes increasingly integrated into daily life, a human-centered approach to XAI development will be pivotal in advancing human-AI collaboration across diverse domains.