The Pragmatic Turn in Explainable Artificial Intelligence
The paper "The Pragmatic Turn in Explainable Artificial Intelligence (XAI)" by Andrés Páez argues for a shift in the conceptual framework of Explainable AI: rather than pursuing explainability or interpretability in machine learning systems as ends in themselves, the field should adopt a pragmatic, naturalistic approach centered on understanding. Páez critiques existing XAI methodologies, which lean heavily on explanation as the means of rendering AI systems intelligible, and proposes that understanding should be the primary goal. Drawing on the philosophical and psychological literature on understanding, the paper argues that interpretative or approximation models, of the kind produced by post-hoc interpretability methods, are what genuine comprehension of opaque machine learning systems requires.
Key Arguments and Concepts
- Understanding Over Explanation: Páez challenges the traditional reliance on explanation as the primary goal of XAI. He argues that this approach is constrained by the factivity condition, the requirement that explanations be true, which is often infeasible to satisfy for complex, opaque AI models. Instead, he posits that pursuing understanding, without strict adherence to factivity, is both more realistic and more beneficial.
- Epistemological Inquiry: The paper discusses understanding as a distinct epistemic concept from knowledge, noting that understanding is not necessarily factive. This allows for the use of models and idealizations that may involve "felicitous falsehoods," providing utility without full factual accuracy.
- Contextual Considerations: Páez emphasizes the importance of considering the background and context of the stakeholders using AI systems. Understanding should be tailored to these factors rather than delivered through a uniform explanation model. This approach calls for interdisciplinary collaboration that brings insights from psychology and cognitive science into XAI.
- Alternative Paths to Understanding: The paper explores various devices and methods that can foster understanding, such as models, simulations, and thought experiments. These approaches provide non-propositional representations conducive to comprehension, without necessitating complete factual transparency.
- Functional Versus Mechanistic Understanding: Páez distinguishes functional understanding, which concerns the role and purpose of an AI system, from mechanistic understanding, which concerns its internal processes and structures. While the paper acknowledges that functional understanding offers a model-independent perspective, it argues that mechanistic understanding remains crucial for generating trust and accountability in AI systems.
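The idea of a "felicitous falsehood" from the points above can be made concrete with a minimal sketch (my own illustration, not from the paper): a simple linear surrogate fit to the outputs of a black-box model. The surrogate is literally false about the black box's inner workings, yet it can still convey an understanding of how the model behaves near a point of interest. The `black_box` function here is a hypothetical stand-in for an opaque model.

```python
def black_box(x):
    """Hypothetical stand-in for an opaque model: a nonlinear scoring function."""
    return 3.0 * x + 0.5 * x ** 2

def fit_linear_surrogate(f, xs):
    """Least-squares line y = a*x + b approximating f on the sample points xs."""
    n = len(xs)
    ys = [f(x) for x in xs]
    mean_x = sum(xs) / n
    mean_y = sum(ys) / n
    a = (sum((x - mean_x) * (y - mean_y) for x, y in zip(xs, ys))
         / sum((x - mean_x) ** 2 for x in xs))
    b = mean_y - a * mean_x
    return a, b

# Sample the black box around a point of interest (here x = 0) and read off
# a simple, inspectable story about its local behavior.
xs = [k / 10 for k in range(-10, 11)]
a, b = fit_linear_surrogate(black_box, xs)
print(f"surrogate: y ~ {a:.2f}*x + {b:.2f}")
```

The surrogate's coefficients misdescribe the black box globally, but they give a stakeholder a usable grasp of its behavior in context, which is the pragmatic point: comprehension can come from a representation that is useful without being strictly true.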
Implications
The paper's shift toward emphasizing understanding has far-reaching implications for the development and deployment of AI systems. From a theoretical perspective, it redefines the objectives of XAI research, encouraging a deeper inquiry into cognitive and pragmatic factors that contribute to human understanding. Practically, adopting a focus on understanding rather than explanation could influence the creation of interpretable models and tools that better accommodate user needs and cognitive biases.
Moving forward, XAI could benefit from empirical studies that investigate how different interpretative models affect user comprehension across diverse contexts. Collaboration with fields such as design, psychology, and cognitive science seems paramount to developing user-friendly, intuitive interfaces for interaction with AI systems. Understanding complex systems and their decisions in a way that fits the user's knowledge and goals can enhance the perceived accuracy and reliability of AI, thus fostering trust and enabling ethical and legal accountability.
Conclusion
"The Pragmatic Turn in Explainable Artificial Intelligence" proposes an overhaul in the pursuit of interpretability in AI systems by emphasizing understanding as a primary goal. This pragmatic and naturalistic approach opens new avenues for enhancing user interaction with AI, advocating for tools and models that promote comprehension without rigid adherence to factual explanation. Páez's insights challenge researchers to reconsider the aims of XAI, embracing a broader and more versatile framework for understanding AI systems and their decision-making processes.