
Explanation in Artificial Intelligence: Insights from the Social Sciences (1706.07269v3)

Published 22 Jun 2017 in cs.AI

Abstract: There has been a recent resurgence in the area of explainable artificial intelligence as researchers and practitioners seek to make their algorithms more understandable. Much of this research is focused on explicitly explaining decisions or actions to a human observer, and it should not be controversial to say that looking at how humans explain to each other can serve as a useful starting point for explanation in artificial intelligence. However, it is fair to say that most work in explainable artificial intelligence uses only the researchers' intuition of what constitutes a `good' explanation. There exists vast and valuable bodies of research in philosophy, psychology, and cognitive science of how people define, generate, select, evaluate, and present explanations, which argues that people employ certain cognitive biases and social expectations towards the explanation process. This paper argues that the field of explainable artificial intelligence should build on this existing research, and reviews relevant papers from philosophy, cognitive psychology/science, and social psychology, which study these topics. It draws out some important findings, and discusses ways that these can be infused with work on explainable artificial intelligence.

Authors (1)
  1. Tim Miller (53 papers)
Citations (3,879)

Summary

  • The paper demonstrates that explanations are inherently contrastive, focusing on why events occur relative to plausible alternatives.
  • The paper shows that incorporating human cognitive biases can simplify and improve the relatability of AI-generated explanations.
  • The paper highlights that explanations function as social interactions, advocating for interactive dialogue systems to boost user trust.

Insights on Explanation in Artificial Intelligence from Social Sciences

The paper "Explanation in Artificial Intelligence: Insights from the Social Sciences" by Tim Miller aims to bridge the gap between the fields of explainable artificial intelligence (XAI) and social sciences, specifically focusing on how humans understand and generate explanations. The paper posits that leveraging findings from philosophy, psychology, and cognitive science can significantly enhance the effectiveness of XAI.

Key Findings and Theoretical Integration

Contrastive Nature of Explanations

One of the central themes of the paper is the assertion that explanations are inherently contrastive. Human explanations typically address why an event (P) occurred instead of some other event (Q), known as the foil. This perspective aligns with the cognitive processes involved in explanation, where people often seek to understand specific differences between plausible alternatives rather than generating exhaustive causal histories. This concept is crucial for XAI as it indicates that explanations should focus on distinguishing between the event that occurred and possible alternatives that did not.
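As an illustration of this contrastive framing, the following minimal sketch (not from the paper) answers a "Why P rather than Q?" question for a simple linear scorer by keeping only the features that do the most work in separating the predicted class from the foil; the class names, feature names, and weights are hypothetical.

```python
# Minimal sketch of a contrastive "Why P rather than Q?" explanation,
# assuming a linear model with known per-class weights. Class names,
# feature names, and weights are illustrative, not from the paper.

def contrastive_explanation(x, weights, fact, foil, top_k=2):
    """Return the features that most favour the fact (P) over the foil (Q)."""
    # Contribution of each feature to the margin between fact and foil.
    diffs = {
        name: x[name] * (weights[fact][name] - weights[foil][name])
        for name in x
    }
    # Keep the features that do the most work in separating P from Q.
    ranked = sorted(diffs.items(), key=lambda kv: kv[1], reverse=True)
    return ranked[:top_k]

# Hypothetical loan-approval example.
x = {"income": 0.9, "debt_ratio": 0.2, "years_employed": 0.7}
weights = {
    "approve": {"income": 1.5, "debt_ratio": -2.0, "years_employed": 0.8},
    "deny":    {"income": 0.2, "debt_ratio": 1.0, "years_employed": 0.1},
}

for feature, contribution in contrastive_explanation(x, weights, "approve", "deny"):
    print(f"'approve' rather than 'deny' mainly because of {feature} "
          f"(contribution {contribution:+.2f})")
```

The point is that the explanation is relative to a stated foil; changing the foil changes which features are worth mentioning.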

Cognitive Bias in Explanation Selection

The paper also highlights that humans employ certain cognitive biases and heuristics when selecting explanations. Notably, people tend to prefer explanations that are simple, general, and coherent with their prior knowledge. Moreover, humans often select explanations based on abnormal or unexpected events while discounting routine occurrences. For XAI, this implies that AI systems should prioritize and present explanations that align with these human cognitive tendencies to be more intuitive and acceptable to users.
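One simple way to picture this selection step is as a scoring function over candidate explanations that rewards simplicity, abnormality, and coherence with prior beliefs. The sketch below is an illustrative assumption, not a model proposed in the paper; the weights and candidates are made up.

```python
# Minimal sketch of biased explanation selection: prefer explanations that
# cite few causes, highlight abnormal events, and cohere with what the
# explainee already believes. Weights and candidates are illustrative.

from dataclasses import dataclass

@dataclass
class Candidate:
    causes: list          # causes cited by the explanation
    abnormality: float    # 0 = routine, 1 = highly unusual
    coherence: float      # 0 = conflicts with prior beliefs, 1 = fits them

def select_explanation(candidates, w_simple=1.0, w_abnormal=1.0, w_coherent=1.0):
    def score(c):
        simplicity = 1.0 / len(c.causes)   # fewer causes -> simpler
        return (w_simple * simplicity
                + w_abnormal * c.abnormality
                + w_coherent * c.coherence)
    return max(candidates, key=score)

candidates = [
    Candidate(causes=["sensor glitch"], abnormality=0.9, coherence=0.6),
    Candidate(causes=["rain", "traffic", "roadworks"], abnormality=0.2, coherence=0.8),
]
best = select_explanation(candidates)
print("Selected explanation cites:", ", ".join(best.causes))
```

Under this toy scoring, the single abnormal cause wins over the longer, more routine causal story, mirroring the preference the paper describes.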

Social and Interactive Nature of Explanations

Explanations are not merely factual accounts but social interactions aimed at transferring understanding between individuals. The conversational model, grounded in work such as Grice's maxims of conversation, emphasizes that explanations should be relevant, concise, and conveyed in a manner that is easy to understand. This social dimension requires that AI systems be designed to engage in interactive dialogue, tailoring explanations to the explainee's background knowledge and the context of the inquiry.
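As a concrete, if deliberately simplistic, illustration of the conversational view, the sketch below runs a toy explanation dialogue that answers only what is asked and lets the explainee drive with follow-up questions; the questions and canned answers are hypothetical and not part of any system described in the paper.

```python
# Minimal sketch of an explanation dialogue in the spirit of the
# conversational model: answer only what is asked and let follow-up
# questions steer the exchange. All content is illustrative.

def explanation_dialogue(initial_answer, follow_ups):
    """Run a toy explainer loop until the user types 'done'."""
    print("System:", initial_answer)
    while True:
        question = input("User (or 'done'): ").strip().lower()
        if question == "done":
            break
        # Pick a stored answer if one matches; otherwise admit ignorance.
        answer = follow_ups.get(question, "I don't have more detail on that.")
        print("System:", answer)

follow_ups = {
    "why not route b?": "Route B was 12 minutes slower due to roadworks.",
    "what if traffic clears?": "Then route B becomes faster and would be chosen.",
}
# explanation_dialogue("I chose route A because it was fastest.", follow_ups)
```

A real system would, of course, need to interpret free-form questions and track what the explainee already knows rather than matching canned strings.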

Practical and Theoretical Implications

The integration of social science theories into XAI has several implications. Practically, it suggests that AI systems must be equipped to handle contrastive questions effectively, selecting and presenting explanations that align with human cognitive biases. This could enhance user trust and satisfaction, especially in domains where transparency and interpretability are critical, such as healthcare, autonomous driving, and legal decision-making.

Theoretically, the paper opens new avenues for interdisciplinary research. Collaboration between AI researchers and social scientists can lead to the development of more robust models of explanation that account for the intricate ways humans process and evaluate explanatory information. This could, in turn, inform the design of algorithms and interfaces that better serve end-users.

Future Directions in XAI

The paper speculates on future developments in AI, particularly the need for models that support the interactive and social nature of explanations. Future research could focus on:

  1. Developing Interactive Dialogue Systems: Systems that engage users in explanatory dialogues, adjusting explanations based on user feedback and questions.
  2. Implementing Cognitive Bias Models: Incorporating models of human cognitive biases into AI to prioritize explanations that are most likely to be accepted and understood by users.
  3. Creating Personalized Explanation Frameworks: Tailoring explanations based on individual user profiles, taking into account their knowledge, preferences, and usage context (a brief sketch of this idea follows the list).
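For the third direction, a minimal sketch of what personalization might look like is given below; the profile fields, the expertise threshold, and the explanation texts are assumptions made purely for illustration.

```python
# Minimal sketch of tailoring explanation depth to a user profile.
# Profile fields, the 0.7 threshold, and the texts are illustrative.

def personalise(explanation_levels, profile):
    """Pick an explanation whose technical depth matches the user's expertise."""
    level = "expert" if profile.get("expertise", 0.0) >= 0.7 else "lay"
    return explanation_levels[level]

explanation_levels = {
    "lay": "The loan was denied mainly because your debt is high relative to income.",
    "expert": "Denied: debt_ratio = 0.62 exceeded the 0.45 policy threshold and "
              "dominated the model's decision.",
}

print(personalise(explanation_levels, {"expertise": 0.2}))   # lay explanation
print(personalise(explanation_levels, {"expertise": 0.9}))   # expert explanation
```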

In conclusion, the paper by Tim Miller provides a comprehensive overview of how insights from the social sciences can be leveraged to advance the field of explainable AI. By understanding and mimicking the ways humans generate and interpret explanations, AI systems can become more intuitive, trustworthy, and ultimately more effective in their interactions with human users.
