What Does Explainable AI Really Mean? A New Conceptualization of Perspectives (1710.00794v1)

Published 2 Oct 2017 in cs.AI

Abstract: We characterize three notions of explainable AI that cut across research fields: opaque systems that offer no insight into their algorithmic mechanisms; interpretable systems where users can mathematically analyze their algorithmic mechanisms; and comprehensible systems that emit symbols enabling user-driven explanations of how a conclusion is reached. The paper is motivated by a corpus analysis of NIPS, ACL, COGSCI, and ICCV/ECCV paper titles showing differences in how work on explainable AI is positioned in various fields. We close by introducing a fourth notion: truly explainable systems, where automated reasoning is central to output crafted explanations without requiring human post-processing as the final step of the generative process.

Citations (424)

Summary

  • The paper defines explainable AI through three core notions—opaque, interpretable, and comprehensible—and introduces a fourth notion, truly explainable systems, which integrate automated reasoning.
  • The authors conduct corpus analysis across major AI conferences, revealing how different research communities focus on explainability.
  • The framework promotes interdisciplinary approaches by linking neural-symbolic integration and cognitive models to achieve autonomous, human-aligned AI explanations.

A New Framework for Explainable AI

The paper "What Does Explainable AI Really Mean?" by Derek Doran, Sarah Schulz, and Tarek R. Besold offers a meticulous exploration of the multifaceted nature of explainable AI (XAI). The authors address the complexity and necessity of developing AI systems that can offer explanations for their decision-making processes, which is crucial for accountability, especially in sensitive scenarios with significant ethical, safety, or legal ramifications.

Conceptual Framework

The authors delineate three core notions of explainability: opaque systems, interpretable systems, and comprehensible systems. They critique the current literature for its vague use of terminology around interpretability and propose these distinct categories to clarify the discourse; a minimal code sketch after the list below makes the contrast concrete.

  • Opaque Systems: These systems operate as metaphorical black boxes, providing outputs without any transparency about the internal decision-making process. They stand in sharp contrast to the growing demand for systems that offer insight into their functioning, particularly when decisions have impactful consequences.
  • Interpretable Systems: Here, the system's algorithmic processes are mathematically transparent, allowing users to logically follow the decision path through explicit model parameters and structures. Interpretable systems facilitate a technical understanding, making the decision-making process accessible to users with adequate technical expertise.
  • Comprehensible Systems: These systems produce symbols that assist users in constructing explanations for outputs. The comprehensibility is user-dependent, meaning the degree to which these systems are comprehensible can vary based on the user's background knowledge and cognitive abilities.
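To make the three notions concrete, here is a minimal Python sketch, invented for illustration rather than drawn from the paper, that views a single linear credit-scoring model through each lens; the feature names, weights, and `score` function are hypothetical.

```python
# Illustrative sketch (not from the paper): one linear scoring model
# viewed through the three lenses the authors define. All names and
# values here are hypothetical.

weights = {"income": 0.6, "debt": -0.9, "age": 0.1}  # fitted elsewhere

def score(applicant: dict) -> float:
    """Linear credit score: explicit parameters and additive structure."""
    return sum(weights[f] * applicant[f] for f in weights)

applicant = {"income": 1.2, "debt": 0.4, "age": 0.3}

# Opaque view: only the output is visible; no insight into the mechanism.
print("decision:", "approve" if score(applicant) > 0 else "reject")

# Interpretable view: users can mathematically analyze the mechanism
# by inspecting the explicit weights and the additive structure.
print("weights:", weights)

# Comprehensible view: the system emits symbols (per-feature
# contributions) from which users construct their own explanation.
contributions = {f: weights[f] * applicant[f] for f in weights}
print("contributions:", contributions)
```

The same model supports all three readings; what differs is how much of its mechanism is exposed to the user and in what form.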

The paper further proposes a fourth notion—a truly explainable system that integrates automated reasoning methods to yield human-understandable explanations for decisions without human mediation in the final explanation process. This vision aspires to AI systems that can autonomously provide coherent and contextually meaningful explanations, thus reducing the variability introduced by human interpretation.
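As a rough illustration of this fourth notion, the toy pipeline below pairs a decision procedure with a rule-based reasoner that assembles the finished explanation itself; the rules, thresholds, and function names are hypothetical assumptions, not the authors' design.

```python
# Illustrative sketch (not the authors' system): automated reasoning,
# not a human, turns the decision into a crafted explanation.
# Rules and thresholds are invented for this example.

RULES = [
    (lambda a: a["debt_ratio"] > 0.5,
     "the debt ratio exceeds the 0.5 policy threshold"),
    (lambda a: a["income"] < 1.0,
     "income falls below the qualifying level"),
]

def decide_and_explain(applicant: dict) -> tuple[str, str]:
    # Collect every rule that fires and chain the reasons into a
    # final explanation, with no human post-processing step.
    fired = [reason for test, reason in RULES if test(applicant)]
    if fired:
        return "reject", "Rejected because " + " and ".join(fired) + "."
    return "approve", "Approved: no rejection rule fired."

decision, explanation = decide_and_explain({"debt_ratio": 0.7, "income": 1.4})
print(decision)     # reject
print(explanation)  # Rejected because the debt ratio exceeds the 0.5 policy threshold.
```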

Corpus Analysis

In addition to the conceptual framework, the authors conduct a corpus analysis of paper titles from prominent AI-related conferences: NIPS, ACL, COGSCI, and ICCV/ECCV. They seek to uncover variations in the focus and treatment of explainability concepts across different research communities. The corpus statistics demonstrate a differential emphasis on explainability, influenced by each field's methodological priorities and terminological preferences. For example, Cognitive Science shows significantly higher engagement with explainability, likely due to its inherent focus on understanding human cognition and reasoning, whereas the Computer Vision and NLP communities often treat explainability in the context of models and features.
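The sketch below illustrates the general style of such a title-based corpus analysis; the venues follow the paper, but the example titles, keyword lexicon, and counting scheme are invented stand-ins, not the authors' actual corpus or methodology.

```python
# Illustrative sketch of a title-keyword corpus analysis.
# Titles and keyword stems below are invented examples.

KEYWORDS = {"explain", "interpret", "comprehen", "transparen"}

titles_by_venue = {
    "NIPS":   ["Interpretable deep models for sequence data",
               "Scalable variational inference"],
    "COGSCI": ["Explanation and human concept learning",
               "Comprehension of causal explanations"],
}

def mentions_explainability(title: str) -> bool:
    """True if any explainability-related stem appears in the title."""
    t = title.lower()
    return any(k in t for k in KEYWORDS)

# Fraction of titles per venue that touch on explainability.
rates = {
    venue: sum(mentions_explainability(t) for t in titles) / len(titles)
    for venue, titles in titles_by_venue.items()
}
print(rates)  # {'NIPS': 0.5, 'COGSCI': 1.0}
```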

Implications for Future Research

The paper’s findings carry significant implications for future research directions in XAI. The transition toward truly explainable systems suggests a need for interdisciplinary collaboration, drawing on neural-symbolic integration research and on studies of how humans comprehend explanations. By furthering this line of inquiry, researchers could facilitate the development of AI systems that not only offer logical clarity but also align explanations with human cognitive models.

Additionally, the recognition that XAI must incorporate reasoning engines highlights an area ripe for innovative approaches. By embedding domain-specific knowledge bases and logic constructs within AI systems, the research community can advance toward models capable of crafting fully automated, user-aligned justifications for decisions.

Conclusion

In summary, the paper provides a robust conceptual framework for understanding, researching, and implementing explainable AI systems. By distinguishing between different types of explainability and advocating for the integration of reasoning in AI models, the authors lay groundwork that can guide both theoretical advances and practical implementations of AI systems that are better aligned with the ethical and functional requirements of accountability and transparency. This work encourages the AI research community to pursue more sophisticated means of rendering machine learning models not just interpretable or comprehensible, but genuinely explainable to human users.
