
Explainable AI: Beware of Inmates Running the Asylum Or: How I Learnt to Stop Worrying and Love the Social and Behavioural Sciences (1712.00547v2)

Published 2 Dec 2017 in cs.AI

Abstract: In his seminal book 'The Inmates are Running the Asylum: Why High-Tech Products Drive Us Crazy And How To Restore The Sanity' [2004, Sams Indianapolis, IN, USA], Alan Cooper argues that a major reason why software is often poorly designed (from a user perspective) is that programmers are in charge of design decisions, rather than interaction designers. As a result, programmers design software for themselves, rather than for their target audience, a phenomenon he refers to as the 'inmates running the asylum'. This paper argues that explainable AI risks a similar fate. While the re-emergence of explainable AI is positive, this paper argues most of us as AI researchers are building explanatory agents for ourselves, rather than for the intended users. But explainable AI is more likely to succeed if researchers and practitioners understand, adopt, implement, and improve models from the vast and valuable bodies of research in philosophy, psychology, and cognitive science, and if evaluation of these models is focused more on people than on technology. From a light scan of literature, we demonstrate that there is considerable scope to infuse more results from the social and behavioural sciences into explainable AI, and present some key results from these fields that are relevant to explainable AI.

A Critical Analysis of "Explainable AI: Beware of Inmates Running the Asylum"

The paper "Explainable AI: Beware of Inmates Running the Asylum" by Tim Miller, Piers Howe, and Liz Sonenberg presents a nuanced critique of the current trajectory of explainable artificial intelligence (XAI) research. Drawing a parallel to Alan Cooper's argument about design practices in software engineering, the authors suggest that AI researchers risk designing explanations that are comprehensible to themselves rather than to the actual users. The principal thesis is that effective XAI systems must incorporate insights from the social and behavioral sciences.

Key Insights and Contributions

1. Evaluation of Current XAI Practices:

The authors conducted a light scan of the literature to gauge the influence of the social sciences on XAI research, examining 23 articles from an explainable AI workshop for whether they drew on, or were inspired by, social science research. They found that only a few articles were grounded in social science models or included human behavioral studies in their evaluations, pointing to a significant gap in how XAI research incorporates social science insights.

2. Overview of Social Science Contributions:

The paper advocates for the integration of explanatory models derived from the social and behavioral sciences. These include:

  • Contrastive Explanation: People typically ask "Why P rather than Q?"; explanations are more meaningful when they address the implied contrast case rather than the event in isolation.
  • Attribution Theory: The distinction between causal and social (intentional) attributions provides a framework for how people assign causes to events and behaviors.
  • Explanation Selection and Evaluation: People prefer simpler and more coherent explanations, a preference that can inform how an XAI system chooses among candidate explanations (see the sketch after this list).
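
As a rough, illustrative sketch (not taken from the paper), the snippet below shows how the simplicity and coherence preferences above might be operationalized when a system must choose among several candidate explanations. The class, field, and weight names are hypothetical assumptions.

```python
# Minimal sketch: ranking candidate explanations by simplicity (fewer causes)
# and coherence (agreement with the user's prior beliefs). All names and
# weights are illustrative assumptions, not the paper's method.
from dataclasses import dataclass

@dataclass
class CandidateExplanation:
    causes: list[str]    # causes cited by the explanation
    consistency: float   # agreement with the user's prior beliefs, in [0, 1]

def explanation_score(exp: CandidateExplanation,
                      simplicity_weight: float = 1.0,
                      coherence_weight: float = 1.0) -> float:
    """Higher is better: fewer causes (simplicity) plus more coherence."""
    simplicity = 1.0 / (1 + len(exp.causes))
    return simplicity_weight * simplicity + coherence_weight * exp.consistency

def select_explanation(candidates: list[CandidateExplanation]) -> CandidateExplanation:
    """Pick the candidate a user is most likely to accept under these heuristics."""
    return max(candidates, key=explanation_score)

if __name__ == "__main__":
    candidates = [
        CandidateExplanation(causes=["low income", "short credit history", "recent default"],
                             consistency=0.9),
        CandidateExplanation(causes=["recent default"], consistency=0.8),
    ]
    print("Selected explanation cites:", select_explanation(candidates).causes)
```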

3. Implications for XAI Models:

The potential for AI models to benefit from these social science frameworks is significant. By understanding how humans naturally process explanations, AI systems can provide explanations that align with human cognition. For instance, leveraging contrastive explanations can help narrow down the focus of AI outputs, creating more relatable and accessible interpretations for users.
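
To make the contrastive point concrete, here is a minimal sketch, assuming a simple linear classifier, of answering "Why class P rather than class Q?" by ranking the features that most push the model's score toward P and away from Q. The function name and toy data are illustrative assumptions, not the authors' method.

```python
# Minimal sketch of a contrastive explanation for a linear classifier:
# which features most favour the predicted class P over the contrast class Q?
import numpy as np

def contrastive_explanation(x, weights, class_p, class_q, feature_names, top_k=3):
    """Return the top_k features that most favour class_p over class_q."""
    # Per-feature difference in contribution between the two classes.
    delta = (weights[class_p] - weights[class_q]) * x
    ranked = np.argsort(delta)[::-1][:top_k]
    return [(feature_names[i], float(delta[i])) for i in ranked]

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    feature_names = ["income", "credit_history", "loan_amount", "age"]
    weights = rng.normal(size=(2, 4))   # toy weights for classes 0 (P) and 1 (Q)
    x = rng.normal(size=4)              # one applicant's feature values
    for name, contrib in contrastive_explanation(x, weights, 0, 1, feature_names):
        print(f"{name}: favours P over Q by {contrib:+.2f}")
```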

Analytical Observations

The critique offered by the authors challenges XAI researchers to step beyond computational effectiveness and consider the user-centric perspective in explanation design. Relying on the social sciences to guide explanation models underscores the human-centric nature of XAI: the ultimate goal of any explanation is comprehension by its intended, often non-expert, users.

Potential for Future Research

The outlined framework of integrating social and behavioral sciences into XAI offers fertile ground for future research. Investigations could aim to:

  • Develop methodologies to systematically derive user preferences and requirements for explanations.
  • Establish collaborations between AI developers and social scientists to foster cross-disciplinary innovation in XAI.
  • Design empirical studies that test the efficacy of socially-inspired explanation models in enhancing user trust and understanding.

Conclusion

The paper underscores the importance of interdisciplinary collaboration in advancing XAI. By borrowing from established theories in the social and behavioral sciences, AI researchers can create more effective, user-friendly systems. This advocacy for interdisciplinary approaches can lead to XAI models that are not only functionally transparent but also aligned with user expectations and understanding, a significant step toward practical and comprehensible AI systems in real-world applications.

Authors (3)
  1. Tim Miller (53 papers)
  2. Piers Howe (3 papers)
  3. Liz Sonenberg (16 papers)
Citations (352)