Case-Based Counterfactual Generation for Explainable AI
This paper addresses the application of Case-Based Reasoning (CBR) to the generation of counterfactual explanations in Explainable AI (XAI). Recent XAI literature suggests counterfactual explanations are preferable to factual ones because they are more causally informative and better aligned with legal requirements such as the GDPR. Yet generating "good" counterfactuals, ones that are sparse, plausible, and lie close to the decision boundary, is often challenging.
Introduction
The authors situate the paper within the historical role of CBR in providing explanations that resemble human reasoning from precedents. Traditional CBR approaches rely on factual cases, whereas this paper focuses on counterfactual cases (nearest unlike neighbors) that show how a prediction could change. The authors examine the scarcity of good counterfactuals in existing CBR case-bases and propose a twin-systems approach that uses a case-based methodology to enhance the explanatory competence of opaque deep learning models.
Methodology
The paper introduces the notion of counterfactual potential in case-bases, measured as the proportion of case pairs that qualify as good counterfactuals. An examination of 20 datasets reveals that existing case-bases contain few counterfactuals that meet this criterion. Consequently, the authors propose a novel technique that leverages the structure of good counterfactuals within case-bases to create new explanations for novel queries.
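A minimal sketch of how such a measure could be computed is shown below, assuming a tabular case-base with a numeric feature matrix `X` and class labels `y`. The two-feature threshold for a "good" counterfactual and the exhaustive nearest-unlike-neighbour search are illustrative assumptions, not the authors' exact procedure.

```python
import numpy as np

def counterfactual_potential(X, y, max_diff=2, tol=1e-6):
    """Estimate the fraction of cases whose nearest unlike neighbour (NUN)
    differs in at most `max_diff` features, i.e. cases that already have a
    'good' counterfactual inside the case-base."""
    n = len(X)
    has_good_cf = 0
    for i in range(n):
        unlike = np.where(y != y[i])[0]            # cases with a different class
        if unlike.size == 0:
            continue
        dists = np.linalg.norm(X[unlike] - X[i], axis=1)
        nun = unlike[np.argmin(dists)]             # nearest unlike neighbour
        n_changed = np.sum(np.abs(X[nun] - X[i]) > tol)
        if n_changed <= max_diff:                  # sparse enough to count as 'good'
            has_good_cf += 1
    return has_good_cf / n
```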
Case-Based Counterfactual Generation Technique
The proposed technique, sketched in code after this list, involves:
- Identifying explanation cases (XCs) from the case-base that serve as clues for generating new counterfactuals.
- Building new counterfactuals by transferring values from the XC to the query case while preserving the sparsity and plausibility of the modifications.
- Utilizing the underlying ML model to validate the class change.
- Adapting the candidate when it fails to change the class, by drawing on nearest neighbors until a valid class change is achieved.
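To make the pipeline concrete, the sketch below strings these steps together under some simplifying assumptions: the case-base is a numeric feature matrix `X` with labels `y`, `model` exposes a scikit-learn-style `predict`, and `xc_pairs` is a precomputed list of explanation-case index pairs. The feature-transfer rule and the adaptation loop are illustrative choices, not the authors' exact algorithm.

```python
import numpy as np

def generate_counterfactual(query, X, y, model, xc_pairs, tol=1e-6):
    """Sketch of case-based counterfactual generation for a single query.

    `xc_pairs` is assumed to hold (case, counterfactual) index pairs from the
    case-base that already qualify as good counterfactuals for each other (XCs).
    """
    target = model.predict(query.reshape(1, -1))[0]

    # 1. Retrieve the XC whose factual side is nearest to the query.
    dists = [np.linalg.norm(X[p] - query) for p, _ in xc_pairs]
    p, p_cf = xc_pairs[int(np.argmin(dists))]

    # 2. Transfer the XC's difference-feature values onto the query,
    #    leaving all other (match) features untouched: a sparse, plausible edit.
    diff_feats = np.where(np.abs(X[p] - X[p_cf]) > tol)[0]
    candidate = query.copy()
    candidate[diff_feats] = X[p_cf][diff_feats]

    # 3. Validate the class change with the underlying ML model.
    if model.predict(candidate.reshape(1, -1))[0] != target:
        return candidate

    # 4. Adapt: borrow difference-feature values from nearby cases of the
    #    opposite class until the prediction flips.
    unlike = np.where(y != target)[0]
    order = unlike[np.argsort(np.linalg.norm(X[unlike] - candidate, axis=1))]
    for nn in order:
        adapted = candidate.copy()
        adapted[diff_feats] = X[nn][diff_feats]
        if model.predict(adapted.reshape(1, -1))[0] != target:
            return adapted
    return None  # no valid counterfactual found
```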
Experimental Findings
Two experiments underpin the paper. The first maps counterfactual potential across standard datasets, demonstrating the paucity of naturally occurring good counterfactuals. The second evaluates the proposed technique across various datasets, showing a substantial improvement in explanatory coverage once synthetic counterfactuals are generated. The adaptation step further reduces counterfactual distance, proving critical for producing counterfactuals that stay close to the query while still changing the predicted class.
Implications and Future Work
The authors make a substantive advance in leveraging CBR for counterfactual XAI, addressing the sparsity and plausibility limitations common to perturbation-based approaches. The paper argues for explanation competence, a counterpart to predictive competence, and offers a systematic, empirically supported approach to improving the utility of counterfactual explanations in AI systems.
Future work should incorporate extensive user trials to better understand psychological aspects influencing the effectiveness of explanations and explore the applicability across a wider range of datasets and domains. Overall, the application of CBR for generating counterfactuals offers promising directions for improving transparency in complex AI systems.