Human-centered Explainable AI: Towards a Reflective Sociotechnical Approach (2002.01092v2)

Published 4 Feb 2020 in cs.HC and cs.AI

Abstract: Explanations--a form of post-hoc interpretability--play an instrumental role in making systems accessible as AI continues to proliferate complex and sensitive sociotechnical systems. In this paper, we introduce Human-centered Explainable AI (HCXAI) as an approach that puts the human at the center of technology design. It develops a holistic understanding of "who" the human is by considering the interplay of values, interpersonal dynamics, and the socially situated nature of AI systems. In particular, we advocate for a reflective sociotechnical approach. We illustrate HCXAI through a case study of an explanation system for non-technical end-users that shows how technical advancements and the understanding of human factors co-evolve. Building on the case study, we lay out open research questions pertaining to further refining our understanding of "who" the human is and extending beyond 1-to-1 human-computer interactions. Finally, we propose that a reflective HCXAI paradigm--mediated through the perspective of Critical Technical Practice and supplemented with strategies from HCI, such as value-sensitive design and participatory design--not only helps us understand our intellectual blind spots, but it can also open up new design and research spaces.

Authors (2)
  1. Upol Ehsan
  2. Mark O. Riedl
Citations (184)

Summary

Human-Centered Explainable AI: A Reflective Sociotechnical Approach

The paper "Human-centered Explainable AI: Towards a Reflective Sociotechnical Approach" introduces Human-centered Explainable AI (HCXAI) as a paradigm that prioritizes the human in designing AI systems. It posits that as AI systems are deployed in sensitive sociotechnical contexts, providing explanations is vital for accessibility and acceptability. Unlike traditional machine-centered interpretability approaches, HCXAI emphasizes a sociotechnical perspective considering the interplay of human values, social dynamics, and the embedded nature of AI in social environments.

The authors advocate a reflective approach to HCXAI that balances technological advancement with an understanding of human factors, illustrating it through a case study on explanation generation for non-technical users. Through this paradigm, HCXAI seeks to refine the understanding of "who" the human is and proposes extending beyond one-to-one human-computer interactions by incorporating Critical Technical Practice (CTP), value-sensitive design, and participatory design methodologies.

Case Study Insights

The paper presents a two-phase case study employing a rationale generation approach, which produces natural language explanations of an AI agent's behavior in gameplay scenarios. In the first phase, the research establishes the technical feasibility of generating rationales using a semi-synthetic corpus derived from a reinforcement learning agent's gameplay. Human evaluations showed that the generated rationales were accurate and satisfying to users. This phase provided initial insights into the dimensions of user satisfaction that guided subsequent study designs.
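
The paper itself does not include code, but the core idea of rationale generation, treating explanation as a translation from game context to natural language, can be illustrated with a minimal Python sketch of corpus construction. The names (RationalePair, build_corpus), the grid representation, and the example step below are hypothetical, invented for illustration; they are not the authors' implementation.

```python
from dataclasses import dataclass

# Hypothetical sketch: rationale generation is trained on a parallel corpus
# pairing recorded gameplay steps with human think-aloud explanations.

@dataclass
class RationalePair:
    state: list        # simplified grid representation of the game screen
    action: str        # action taken at that state
    rationale: str     # think-aloud explanation collected from a person

def build_corpus(steps, think_aloud):
    """Align each recorded (state, action) step with its collected rationale."""
    return [RationalePair(s, a, r) for (s, a), r in zip(steps, think_aloud)]

# One Frogger-like step paired with a player's spoken rationale.
steps = [(
    [["water", "log", "water"],
     ["road", "car", "road"],
     ["grass", "frog", "grass"]],
    "move_up",
)]
think_aloud = ["I moved up because the lane ahead was clear of cars."]

for pair in build_corpus(steps, think_aloud):
    print(f"{pair.action} -> {pair.rationale}")
```

A corpus of this shape is what a sequence-to-sequence model would then learn from, mapping encoded game context to explanation text.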

In phase two, the research focused on creating a fully natural corpus and on diversifying rationale generation through different network configurations, showing how the understanding of human factors co-evolves with the technology. This phase employed qualitative human-subject evaluations to discern how user perceptions differ across rationale configurations, revealing emergent user preferences and perception dimensions such as confidence, human-likeness, and explanatory power, thus further refining the "who" in HCXAI systems.
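
To make the evaluation side concrete, the sketch below averages Likert-style ratings along the perception dimensions named above for two rationale configurations. The configuration labels and all ratings are invented for this example; they are not data from the paper.

```python
from statistics import mean

# Hypothetical 1-5 Likert ratings from three participants, scored along the
# perception dimensions discussed above, for two rationale configurations.
ratings = {
    "focused-view rationales": {
        "confidence": [4, 3, 4],
        "human-likeness": [3, 4, 4],
        "explanatory power": [4, 4, 3],
    },
    "complete-view rationales": {
        "confidence": [5, 4, 4],
        "human-likeness": [4, 5, 4],
        "explanatory power": [5, 4, 5],
    },
}

for config, dims in ratings.items():
    averages = {dim: round(mean(scores), 2) for dim, scores in dims.items()}
    print(config, averages)
```

Comparing per-dimension averages like this is one simple way such a study could surface which configuration users perceive as more confident, human-like, or explanatory.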

Implications and Future Directions

The implications of HCXAI are multifaceted. Practically, it stresses the need for AI systems to embed social signals and accommodate diverse user backgrounds so that explanations are effective, especially in multi-stakeholder environments. For AI research, it suggests a pivot towards sociotechnical approaches that engage deeply with human and social factors. Theoretically, it encourages deconstructing prevailing narratives in AI to surface marginalized perspectives and developing new methodologies that privilege human-centered values and democratic design practices.

With AI systems deeply embedded in social settings, HCXAI calls for careful examination of the human-AI relationship. Implementing a reflective paradigm through CTP, complemented by strategies like value-sensitive design and participatory design, fosters critical reflection on the underlying assumptions that shape design and human engagement. Such approaches can surface epistemological blind spots, open up new design spaces, and lead to more socially aware AI systems.

Conclusion

In conclusion, the paper underscores the importance of a human-centered approach to explainable AI design, advocating a reflective sociotechnical lens that integrates technical advancement with an understanding of the humans involved. In doing so, HCXAI aims not just to elucidate AI operations for end-users but to enrich AI systems with human values, fostering a paradigm of explanation that respects and engages the richness of human social contexts. Researchers are invited to critically question dominant narratives and to integrate practices that consider human-machine interaction beyond traditional one-to-one boundaries.