The Pragmatic Turn in Explainable Artificial Intelligence (XAI) (2002.09595v1)

Published 22 Feb 2020 in cs.AI, cs.CY, cs.HC, and cs.LG

Abstract: In this paper I argue that the search for explainable models and interpretable decisions in AI must be reformulated in terms of the broader project of offering a pragmatic and naturalistic account of understanding in AI. Intuitively, the purpose of providing an explanation of a model or a decision is to make it understandable to its stakeholders. But without a previous grasp of what it means to say that an agent understands a model or a decision, the explanatory strategies will lack a well-defined goal. Aside from providing a clearer objective for XAI, focusing on understanding also allows us to relax the factivity condition on explanation, which is impossible to fulfill in many machine learning models, and to focus instead on the pragmatic conditions that determine the best fit between a model and the methods and devices deployed to understand it. After an examination of the different types of understanding discussed in the philosophical and psychological literature, I conclude that interpretative or approximation models not only provide the best way to achieve the objectual understanding of a machine learning model, but are also a necessary condition to achieve post-hoc interpretability. This conclusion is partly based on the shortcomings of the purely functionalist approach to post-hoc interpretability that seems to be predominant in most recent literature.

The Pragmatic Turn in Explainable Artificial Intelligence

The paper "The Pragmatic Turn in Explainable Artificial Intelligence (XAI)" by Andrés Páez presents a compelling shift in the conceptual framework of Explainable AI by advocating for a pragmatic and naturalistic approach to understanding, rather than simply pursuing explainability or interpretability in machine learning systems. Páez critiques existing methodologies in XAI, which focus heavily on explanation as a means to render AI systems understandable, suggesting instead that understanding should be the primary focus. The paper explores philosophical and psychological literature to refine our understanding of this concept, proposing that interpretative or approximation models are necessary for true comprehension of machine learning systems and post-hoc interpretability.

Key Arguments and Concepts

  1. Understanding Over Explanation: Páez challenges the traditional reliance on explanation as the primary goal of XAI. He argues that this approach is constrained by the factivity condition, the requirement that explanations be true, which is often infeasible to satisfy for complex, opaque AI models. Instead, he posits that aiming for understanding, without strict adherence to factivity, is both more realistic and more useful.
  2. Epistemological Inquiry: The paper treats understanding as an epistemic concept distinct from knowledge, noting that understanding is not necessarily factive. This licenses the use of models and idealizations that involve "felicitous falsehoods": representations that are useful without being fully accurate (a minimal code sketch of one such approximation model follows this list).
  3. Contextual Considerations: Páez emphasizes the background and context of the stakeholders who use AI systems. Understanding should be tailored to these factors rather than delivered through a one-size-fits-all explanation model. This approach calls for interdisciplinary collaboration that brings insights from psychology and cognitive science into XAI.
  4. Alternative Paths to Understanding: The paper explores various devices and methods that can foster understanding, such as models, simulations, and thought experiments. These approaches provide non-propositional representations conducive to comprehension, without necessitating complete factual transparency.
  5. Functional Versus Mechanistic Understanding: Páez distinguishes functional understanding, which concerns the role and purpose of an AI system, from mechanistic understanding, which concerns the internal processes and structures that produce its behavior. While the paper acknowledges that functional understanding offers a model-independent perspective, it argues that mechanistic understanding remains crucial for generating trust and accountability in AI systems.
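
To make the notion of an interpretative or approximation model concrete, the sketch below fits a global surrogate (a shallow decision tree) to the predictions of an opaque classifier. This is an illustration in the spirit of the paper, not an example from it: the dataset, the random-forest black box, the tree depth, and the fidelity metric are all assumptions made for the sketch.

```python
# Illustrative sketch (not from the paper): a global surrogate as an
# "approximation model". A shallow decision tree is trained to mimic an
# opaque classifier's predictions, trading factual accuracy for human
# comprehensibility (a "felicitous falsehood" in Páez's terms).
from sklearn.datasets import load_breast_cancer
from sklearn.ensemble import RandomForestClassifier
from sklearn.metrics import accuracy_score
from sklearn.model_selection import train_test_split
from sklearn.tree import DecisionTreeClassifier, export_text

X, y = load_breast_cancer(return_X_y=True, as_frame=True)
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

# The opaque model whose decisions stakeholders want to understand.
black_box = RandomForestClassifier(n_estimators=200, random_state=0)
black_box.fit(X_train, y_train)

# The interpretative model: trained on the black box's *outputs*, so it
# approximates the model itself, not the ground truth.
surrogate = DecisionTreeClassifier(max_depth=3, random_state=0)
surrogate.fit(X_train, black_box.predict(X_train))

# Fidelity: how often the surrogate agrees with the black box on new data.
fidelity = accuracy_score(black_box.predict(X_test), surrogate.predict(X_test))
print(f"Surrogate fidelity to the black box: {fidelity:.2f}")
print(export_text(surrogate, feature_names=list(X.columns)))
```

The surrogate is literally false as an account of how the random forest computes its outputs, yet it can still produce genuine understanding if its fidelity is high enough and its structure simple enough for a stakeholder to grasp; this is precisely the trade-off that relaxing the factivity condition licenses.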

Implications

The paper's shift toward emphasizing understanding has far-reaching implications for the development and deployment of AI systems. From a theoretical perspective, it redefines the objectives of XAI research, encouraging a deeper inquiry into cognitive and pragmatic factors that contribute to human understanding. Practically, adopting a focus on understanding rather than explanation could influence the creation of interpretable models and tools that better accommodate user needs and cognitive biases.

Moving forward, XAI could benefit from empirical studies of how different interpretative models affect user comprehension across diverse contexts. Collaboration with fields such as design, psychology, and cognitive science appears essential for developing user-friendly, intuitive interfaces for interacting with AI systems. Making complex systems and their decisions understandable in a way that fits the user's knowledge and goals can enhance the perceived accuracy and reliability of AI, thereby fostering trust and enabling ethical and legal accountability.

Conclusion

"The Pragmatic Turn in Explainable Artificial Intelligence" proposes an overhaul in the pursuit of interpretability in AI systems by emphasizing understanding as a primary goal. This pragmatic and naturalistic approach opens new avenues for enhancing user interaction with AI, advocating for tools and models that promote comprehension without rigid adherence to factual explanation. Páez's insights challenge researchers to reconsider the aims of XAI, embracing a broader and more versatile framework for understanding AI systems and their decision-making processes.

Authors (1)
  1. Andrés Páez
Citations (174)