
Good Counterfactuals and Where to Find Them: A Case-Based Technique for Generating Counterfactuals for Explainable AI (XAI) (2005.13997v1)

Published 26 May 2020 in cs.AI

Abstract: Recently, a groundswell of research has identified the use of counterfactual explanations as a potentially significant solution to the Explainable AI (XAI) problem. It is argued that (a) technically, these counterfactual cases can be generated by permuting problem-features until a class change is found, (b) psychologically, they are much more causally informative than factual explanations, (c) legally, they are GDPR-compliant. However, there are issues around the finding of good counterfactuals using current techniques (e.g. sparsity and plausibility). We show that many commonly-used datasets appear to have few good counterfactuals for explanation purposes. So, we propose a new case based approach for generating counterfactuals using novel ideas about the counterfactual potential and explanatory coverage of a case-base. The new technique reuses patterns of good counterfactuals, present in a case-base, to generate analogous counterfactuals that can explain new problems and their solutions. Several experiments show how this technique can improve the counterfactual potential and explanatory coverage of case-bases that were previously found wanting.

Case-Based Counterfactual Generation for Explainable AI

This paper addresses the application of Case-Based Reasoning (CBR) to the generation of counterfactual explanations within the domain of Explainable AI (XAI). Recent XAI literature suggests that counterfactual explanations are preferable to factual ones because they are more causally informative and comply with legal standards such as GDPR. Yet generating "good" counterfactuals (sparse, changing only a few features; plausible, lying close to the data distribution; and sufficient to actually flip the model's decision) remains challenging with current techniques.

Introduction

The authors contextualize the work within CBR's historical role of providing explanations through precedents, akin to human reasoning. Traditional CBR approaches rely on factual cases, whereas this paper focuses on counterfactual cases (nearest unlike neighbors) that elucidate what changes would alter a prediction. The authors document the scarcity of good counterfactuals in existing CBR case-bases and propose a case-based methodology, deployed in a twin-systems setup, to enhance the explanatory competence of opaque deep learning models.

Methodology

The paper introduces the notion of the counterfactual potential of a case-base, measured as the proportion of cases that have a good counterfactual nearby (roughly, a nearest unlike neighbor differing in no more than a couple of features). An examination of 20 datasets reveals that existing case-bases contain few pairs meeting this criterion. Consequently, the authors propose a novel technique that reuses the structure of the good counterfactuals a case-base does contain to generate explanations for novel queries.
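To make the measurement concrete, here is a minimal sketch of how counterfactual potential could be computed over a tabular case-base. The NumPy representation, the L1 distance, and the two-feature threshold are illustrative assumptions, not the authors' exact implementation.

```python
import numpy as np

def counterfactual_potential(X, y, max_diff=2):
    """Fraction of cases whose nearest unlike neighbour (NUN) differs
    in at most `max_diff` features. A sketch of the paper's sparsity
    criterion; the distance metric and threshold are assumptions."""
    good = 0
    for i in range(len(X)):
        unlike = np.where(y != y[i])[0]          # cases with a different class
        if unlike.size == 0:
            continue
        dists = np.abs(X[unlike] - X[i]).sum(axis=1)  # L1 distance (assumed)
        nun = unlike[np.argmin(dists)]           # nearest unlike neighbour
        if np.count_nonzero(~np.isclose(X[nun], X[i])) <= max_diff:
            good += 1
    return good / len(X)
```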

Case-Based Counterfactual Generation Technique

The proposed technique involves four steps (a code sketch follows the list):

  1. Identifying explanation cases (XCs) in the case-base, existing pairs of good counterfactuals that serve as templates for generating new counterfactuals.
  2. Building a new counterfactual by transferring values from the XC to the query case while preserving the sparsity and plausibility of the modifications.
  3. Using the underlying ML model to validate that the candidate actually changes the predicted class.
  4. Adapting when validation fails, by drawing values from nearest neighbors until a valid class change is achieved.
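A minimal end-to-end sketch of this loop, under the same assumptions as above (NumPy case-base, L1 distance, a scikit-learn-style `model.predict`); the pair representation of XCs and the adaptation strategy are illustrative guesses, not the authors' exact formulation:

```python
import numpy as np

def generate_counterfactual(query, X, y, model, xc_pairs, k=5):
    """Case-based counterfactual generation sketch.

    xc_pairs: list of (i, j) index pairs from the case-base whose
    members are good counterfactuals of each other (differ in only
    a few features). Assumed precomputed."""
    query_class = model.predict(query.reshape(1, -1))[0]
    # Step 1: consider explanation cases (XCs), nearest to the query first.
    for i, j in sorted(xc_pairs, key=lambda p: np.abs(X[p[0]] - query).sum()):
        diff_feats = np.flatnonzero(~np.isclose(X[i], X[j]))
        # Step 2: transfer only the XC pair's "difference" feature values
        # onto the query, keeping all other features (sparsity preserved).
        candidate = query.copy()
        candidate[diff_feats] = X[j][diff_feats]
        # Step 3: validate the class change with the underlying ML model.
        if model.predict(candidate.reshape(1, -1))[0] != query_class:
            return candidate
        # Step 4: adapt by borrowing the difference-feature values from the
        # candidate's nearest unlike neighbours until the class flips.
        unlike = np.where(y != query_class)[0]
        dists = np.abs(X[unlike] - candidate).sum(axis=1)
        for n in unlike[np.argsort(dists)][:k]:
            adapted = candidate.copy()
            adapted[diff_feats] = X[n][diff_feats]
            if model.predict(adapted.reshape(1, -1))[0] != query_class:
                return adapted
    return None  # no valid counterfactual found for this query
```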

Experimental Findings

Two experiments underpin the paper. The first maps counterfactual potential across the 20 standard datasets, demonstrating the paucity of naturally occurring good counterfactuals. The second evaluates the proposed technique on those datasets, showing substantial improvements in explanatory coverage when synthetic counterfactuals are generated. The adaptation step further reduces counterfactual distance, proving critical to producing counterfactuals that stay close to the query while still changing the predicted class.

Implications and Future Work

The authors make a considerable advance in leveraging CBR for counterfactual XAI, addressing the sparsity and plausibility limitations prevalent in perturbation-based approaches. The paper emphasizes the need for explanatory competence (a parallel to predictive competence) and offers a systematic, empirically backed approach to enhancing the utility of counterfactual explanations in AI systems.

Future work should incorporate extensive user trials to better understand the psychological factors that influence explanation effectiveness, and should explore applicability to a wider range of datasets and domains. Overall, applying CBR to counterfactual generation offers a promising direction for improving transparency in complex AI systems.

Authors
  1. Mark T. Keane
  2. Barry Smyth
Citations (136)