Understanding Explainable AI
Stakeholder Desiderata in XAI
Explainable Artificial Intelligence (XAI) aims to develop AI systems that are transparent and understandable to the humans who interact with or are affected by them. Much XAI research, however, has revolved around creating novel methods without asking whether those methods actually meet the needs and expectations of the different stakeholders involved with AI systems. Recognizing the varying interests, goals, and demands of these stakeholders is crucial, since they are what drive the push for explainability in artificial systems. Stakeholders may include users, developers, parties affected by AI decisions, deployers, and regulators, each with a distinct set of "desiderata," or desired outcomes.
Promoting Understanding Through Explainability
One of the paper's main contributions is its focus on the role of understanding in satisfying stakeholder desiderata. Understanding is positioned as a mediator: it links the information provided by an explainability approach to the achievement of a stakeholder's objectives. It is not merely a desirable outcome in itself but a vehicle for achieving varied stakeholder-specific aims, such as fairness, trust, or legal compliance in the application of AI systems.
The Nature of Explanatory Information
Explanatory information is core to fostering understanding. Different stakeholders, however, may require different forms and depths of explanation, tailored to their level of expertise and the context of use. A novice user and an AI developer, for example, may derive understanding from different kinds of explanations, with correspondingly different effects on their respective desiderata. The form and presentation of explanatory information, whether statistical, contrastive, or causal, are therefore pivotal to providing each stakeholder with the degree of understanding they need.
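As a concrete illustration (not drawn from the paper), the sketch below contrasts two forms of explanatory information for the same hypothetical decision: an attribution-style summary of per-feature contributions and a contrastive explanation reporting the smallest single-feature change that would flip the outcome. The toy credit-scoring model, its feature names, and its weights are all illustrative assumptions.

```python
import numpy as np

# Hypothetical toy credit-scoring model: a linear score over three features.
# Feature names, weights, and the applicant's values are illustrative assumptions.
feature_names = ["income", "debt_ratio", "years_employed"]
weights = np.array([0.8, -1.5, 0.4])
bias = -0.2

def approve(x):
    """Approve if the linear score is non-negative."""
    return weights @ x + bias >= 0

applicant = np.array([0.3, 0.6, 0.5])  # standardized feature values
score = weights @ applicant + bias

# Attribution-style (statistical) explanation: per-feature contribution to the score.
for name, contribution in zip(feature_names, weights * applicant):
    print(f"{name}: contribution {contribution:+.2f}")

# Contrastive explanation: the smallest change to a single feature that
# would flip the decision ("why rejected rather than approved?").
if not approve(applicant):
    for i, name in enumerate(feature_names):
        delta = -score / weights[i]  # change in feature i that brings the score to zero
        print(f"approval would require changing {name} by {delta:+.2f}")
```

A developer might be best served by the full attribution table, while an affected applicant may find the contrastive statement far more actionable.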
Selecting and Developing Explainability Approaches
Developing and selecting explainability approaches requires careful consideration of the types of explanations they generate and their relevance to specific stakeholder needs. Established approaches fall into two broad families: ante-hoc approaches, which build inherently interpretable systems, and post-hoc approaches, which generate explanations after the fact. The chosen approach should align with the type of explanatory information required to advance stakeholders' understanding in accordance with their particular desiderata.
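A minimal sketch of the two families, using scikit-learn on an openly available dataset (the dataset and model choices are illustrative assumptions, not prescriptions from the paper): a shallow decision tree whose learned rules are readable by design stands in for the ante-hoc family, while permutation importance computed on a random forest stands in for a post-hoc explanation of a black-box model.

```python
from sklearn.datasets import load_breast_cancer
from sklearn.ensemble import RandomForestClassifier
from sklearn.inspection import permutation_importance
from sklearn.model_selection import train_test_split
from sklearn.tree import DecisionTreeClassifier, export_text

data = load_breast_cancer()
X_train, X_test, y_train, y_test = train_test_split(
    data.data, data.target, random_state=0
)

# Ante-hoc: a shallow decision tree is interpretable by design;
# its learned decision rules can simply be printed.
tree = DecisionTreeClassifier(max_depth=3, random_state=0).fit(X_train, y_train)
print(export_text(tree, feature_names=list(data.feature_names)))

# Post-hoc: a random forest is treated as a black box; permutation importance
# is computed after training to estimate which features the decision relies on.
forest = RandomForestClassifier(random_state=0).fit(X_train, y_train)
result = permutation_importance(forest, X_test, y_test, n_repeats=10, random_state=0)
ranked = sorted(zip(data.feature_names, result.importances_mean), key=lambda p: -p[1])
for name, importance in ranked[:5]:
    print(f"{name}: {importance:.3f}")
```

Neither output is inherently better; which one advances a given stakeholder's understanding depends on the desiderata at stake.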
Interdisciplinary Opportunities and Insights
The paper emphasizes the interdisciplinary potential in addressing explainability in AI. It calls for psychologists to design empirical studies, philosophers to provide definitions and conceptual guidelines, legal experts to articulate normative constraints, and computer scientists to innovate at the technical frontier. This collaborative effort is seen as key to comprehensively addressing stakeholders' desiderata, thereby yielding AI systems that are explainable, transparent, and ultimately more trustworthy.
In summary, XAI research must center on providing explanations that enhance understanding and thereby satisfy the broad landscape of stakeholder needs. This calls for an iterative, empirical process of evaluating and refining explainability approaches, enriched by insights from diverse academic and practical disciplines, so that as AI systems grow more complex, their workings remain accessible and comprehensible to everyone affected by them.