Human-Centered Explainable AI (XAI): From Algorithms to User Experiences (2110.10790v5)

Published 20 Oct 2021 in cs.AI and cs.HC

Abstract: In recent years, the field of explainable AI (XAI) has produced a vast collection of algorithms, providing a useful toolbox for researchers and practitioners to build XAI applications. With the rich application opportunities, explainability is believed to have moved beyond a demand by data scientists or researchers to comprehend the models they develop, to an essential requirement for people to trust and adopt AI deployed in numerous domains. However, explainability is an inherently human-centric property and the field is starting to embrace human-centered approaches. Human-computer interaction (HCI) research and user experience (UX) design in this area are becoming increasingly important. In this chapter, we begin with a high-level overview of the technical landscape of XAI algorithms, then selectively survey our own and other recent HCI works that take human-centered approaches to design, evaluate, and provide conceptual and methodological tools for XAI. We ask the question "what are human-centered approaches doing for XAI" and highlight three roles that they play in shaping XAI technologies by helping navigate, assess and expand the XAI toolbox: to drive technical choices by users' explainability needs, to uncover pitfalls of existing XAI methods and inform new methods, and to provide conceptual frameworks for human-compatible XAI.

Human-Centered Explainable AI (XAI): From Algorithms to User Experiences

The paper outlines a comprehensive exploration of the evolving field of Explainable AI (XAI), emphasizing the vital role of human-centered approaches. As AI systems grow in complexity and spread across application domains, the need for interpretability has become more pronounced, driving research beyond algorithmic transparency toward the user experience. The paper proposes that the ultimate aim of XAI is to foster trust and adoption by making AI systems understandable to diverse stakeholders, ranging from developers to end-users in high-stakes domains.

A significant focus of the paper is on the vast toolbox of XAI techniques, underscoring that no single solution fits all use cases due to varied user goals, backgrounds, and contexts. Two primary avenues for XAI are delineated: choosing inherently interpretable models and applying post-hoc explanation methods to complex models. This approach recognizes the trade-offs between model performance and interpretability, advocating for informed selections based on specific user needs.
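To make the two avenues concrete, here is a minimal sketch in Python using scikit-learn; the dataset, models, and library choice are illustrative assumptions on my part, not prescriptions from the paper. The first model is inherently interpretable (its learned rules can be read directly), while the second is an opaque ensemble explained post hoc via permutation importance, one of many post-hoc techniques alongside methods such as LIME and SHAP.

```python
# Hedged sketch of the two XAI avenues: dataset, model choices, and
# hyperparameters below are illustrative, not from the paper.
from sklearn.datasets import load_breast_cancer
from sklearn.ensemble import GradientBoostingClassifier
from sklearn.inspection import permutation_importance
from sklearn.model_selection import train_test_split
from sklearn.tree import DecisionTreeClassifier, export_text

X, y = load_breast_cancer(return_X_y=True, as_frame=True)
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

# Avenue 1: an inherently interpretable model. The learned rules can be
# read off directly, at a possible cost in predictive performance.
tree = DecisionTreeClassifier(max_depth=3, random_state=0).fit(X_train, y_train)
print(export_text(tree, feature_names=list(X.columns)))

# Avenue 2: a complex model explained post hoc, here via permutation
# importance (other post-hoc options include LIME and SHAP).
blackbox = GradientBoostingClassifier(random_state=0).fit(X_train, y_train)
result = permutation_importance(blackbox, X_test, y_test,
                                n_repeats=10, random_state=0)
ranked = sorted(zip(X.columns, result.importances_mean), key=lambda p: -p[1])
for name, score in ranked[:5]:
    print(f"{name}: {score:.3f}")
```

In a human-centered workflow, the choice between these avenues would be driven by users' explainability needs and the stakes of the decision, not by the toolbox alone.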

The authors highlight three pivotal roles human-centered approaches play in advancing XAI: guiding technical choices based on user explainability needs, revealing the limitations of current methods through empirical assessment, and fostering new computational and design frameworks by integrating theories from cognitive and social sciences. These roles align with broader objectives to create XAI systems that are not only technically robust but also aligned with human cognitive capacities and social contexts.

Empirical studies are critical in evaluating the effectiveness of XAI methods, particularly in real-world settings where the goal extends beyond understanding to achieving actionable insights. The paper discusses challenges such as over-reliance on algorithmic explanations and the cognitive biases that can arise when users interact with AI systems. Addressing these issues requires both technical refinements and thoughtful design interventions that enhance user engagement and understanding.

Juxtaposing human cognitive processes with the assumptions embedded in XAI algorithms motivates a key argument of the paper: explanations are inherently social and interactive. Insights from the social sciences, such as the conversational nature of human explanations and their selective presentation of information, offer pathways to more intuitive and effective XAI systems. This perspective frames XAI as an interaction problem, demanding attention to how users process and apply explanations in decision-making contexts.
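As a small, purely hypothetical illustration of that selectivity insight (the function, feature names, and values below are invented for this sketch, not drawn from the paper), a UX layer might surface only the few strongest feature attributions rather than the full vector, mirroring how human explainers pick out a handful of salient reasons:

```python
# Hypothetical illustration of selective explanation: show only the
# top-k attributions instead of the full vector. All names and values
# here are made up for illustration.
def selective_explanation(attributions: dict[str, float], k: int = 3) -> str:
    """Return a short, human-readable summary of the k strongest factors."""
    top = sorted(attributions.items(), key=lambda kv: -abs(kv[1]))[:k]
    parts = [f"{name} ({'+' if v >= 0 else '-'}{abs(v):.2f})" for name, v in top]
    return "Main factors: " + ", ".join(parts)

attrs = {"income": 0.42, "credit_history": -0.31, "age": 0.05,
         "employment_length": 0.18, "loan_amount": -0.27}
print(selective_explanation(attrs))
# -> Main factors: income (+0.42), credit_history (-0.31), loan_amount (-0.27)
```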

Finally, framing XAI within the broader sociotechnical landscape underscores the authors' advocacy for participatory, interdisciplinary approaches to AI research and design. Embedding XAI within real-world organizational and social systems equips users not only to interact with AI but also to derive meaningful insights and actions from it.

In summary, the paper serves as a robust foundation for understanding the multifaceted approaches required in developing and deploying human-centered XAI systems. It provides a roadmap for bridging technical and user-experience gaps, fostering the creation of AI technologies that are inherently trustworthy and intelligible. Future research directions will benefit from continued interdisciplinary collaboration and a steadfast focus on the practical implications of XAI in diverse domains.

Authors (2)
  1. Q. Vera Liao (49 papers)
  2. Kush R. Varshney (121 papers)
Citations (170)