Preserving Causal Constraints in Counterfactual Explanations for Machine Learning Classifiers (1912.03277v3)

Published 6 Dec 2019 in cs.LG, cs.AI, and stat.ML

Abstract: To construct interpretable explanations that are consistent with the original ML model, counterfactual examples, which show how the model's output changes with small perturbations to the input, have been proposed. This paper extends the work in counterfactual explanations by addressing the challenge of feasibility of such examples. For explanations of ML models in critical domains such as healthcare and finance, counterfactual examples are useful for an end-user only to the extent that perturbation of feature inputs is feasible in the real world. We formulate the problem of feasibility as preserving causal relationships among input features and present a method that uses (partial) structural causal models to generate actionable counterfactuals. When feasibility constraints cannot be easily expressed, we consider an alternative mechanism where people can label generated CF examples on feasibility: whether it is feasible to intervene and realize the candidate CF example from the original input. To learn from this labelled feasibility data, we propose a modified variational auto encoder loss for generating CF examples that optimizes for feasibility as people interact with its output. Our experiments on Bayesian networks and the widely used "Adult-Income" dataset show that our proposed methods can generate counterfactual explanations that better satisfy feasibility constraints than existing methods. Code repository: https://github.com/divyat09/cf-feasibility

Preserving Causal Constraints in Counterfactual Explanations for Machine Learning Classifiers

The paper addresses the challenge of generating counterfactual (CF) explanations for ML models, particularly focusing on ensuring these explanations satisfy feasibility constraints in real-world applications. Counterfactual explanations have gained attention for their potential to provide intuitive insights into ML decisions by illustrating how small changes in input features can alter the outcome of a model. However, ensuring that these counterfactuals are feasible—that is, they obey the natural and causal constraints present in the real world—is critical, especially in sensitive domains like healthcare and finance.
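To make the underlying idea concrete, a counterfactual example is typically found by perturbing the input under a trade-off between validity (the classifier's output flips to the desired class) and proximity (the perturbation stays small). The snippet below is a minimal, generic sketch of such a gradient-based search in PyTorch; it is not the paper's method or repository code, and `model` is assumed to be a differentiable binary classifier that returns the probability of the positive class.

```python
import torch

def find_counterfactual(model, x, target=1.0, lam=0.1, steps=500, lr=0.05):
    """Gradient-based search: move x_cf toward the target class while
    penalizing the L1 distance from the original input x."""
    x_cf = x.clone().detach().requires_grad_(True)
    opt = torch.optim.Adam([x_cf], lr=lr)
    for _ in range(steps):
        opt.zero_grad()
        pred = model(x_cf)                                   # P(y = 1 | x_cf)
        validity = torch.nn.functional.binary_cross_entropy(
            pred, torch.full_like(pred, target))             # push toward target class
        proximity = (x_cf - x).abs().sum()                   # keep the change small
        (validity + lam * proximity).backward()
        opt.step()
    return x_cf.detach()
```

Feasibility, the focus of this paper, is exactly what such a plain proximity term fails to capture: it treats every small perturbation as equally attainable, regardless of causal dependencies among features.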

Core Contributions and Methodology

The authors advance the current understanding of counterfactual explanations by framing the problem of feasibility as a causal issue. This paper provides a novel approach by utilizing structural causal models (SCMs) to enforce causal constraints while generating counterfactuals. The following are the primary contributions of the paper:

  1. Causal View of Feasibility: The paper formalizes feasibility of CF examples via causal models, stipulating that an example is feasible only if it respects the causal dependencies among input features. This contrasts with prior work, which relied primarily on statistical constraints derived from the data distribution.
  2. Causal Proximity Regularizer: The authors propose a causal proximity loss that replaces the traditional Euclidean distance in the CF generation objective. For an endogenous feature, the regularizer compares its counterfactual value against the value implied by its parent nodes in the SCM, thereby preserving causal relationships (a code sketch following this list illustrates the idea).
  3. Generative Modeling Approach: A variational autoencoder (VAE)-based model is proposed that makes CF generation more practical, computationally efficient, and adaptable to feasibility constraints. In contrast to methods that optimize each counterfactual from scratch, the trained generator produces CF examples directly and can incorporate user feedback on feasibility.
  4. Example-Based CF Generation: When explicit feasibility constraints cannot be specified in advance, the method learns them through user interaction. Users label generated CF examples as feasible or infeasible, and the model iteratively refines its generation process from this feedback (also illustrated in the sketch after this list).
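To illustrate contributions 2 and 4 concretely, the sketch below shows (a) a causal proximity term in which the causally constrained (child) feature of the counterfactual is compared against the value implied by its SCM parents rather than against the original input, and (b) one plausible way user-provided infeasibility labels could enter the objective as a differentiable penalty. This is a hedged sketch under simplifying assumptions, not the authors' exact VAE loss; `scm_parent_fn`, `child_idx`, `parent_idx`, and `infeasible_deltas` are illustrative names rather than quantities from the repository.

```python
import torch

def causal_proximity(x_cf, x, scm_parent_fn, child_idx, parent_idx):
    """Causal proximity: the constrained child feature of the counterfactual is
    compared with the value its SCM parents imply (structural equation), while
    unconstrained features keep the usual distance to the original input."""
    implied_child = scm_parent_fn(x_cf[..., parent_idx])     # structural equation f_child(parents)
    causal_term = (x_cf[..., child_idx] - implied_child).abs().sum()
    keep = [i for i in range(x.shape[-1]) if i != child_idx]
    plain_term = (x_cf[..., keep] - x[..., keep]).abs().sum()
    return causal_term + plain_term

def feasibility_penalty(x_cf, x, infeasible_deltas, margin=0.0):
    """Example-based term (one possible instantiation): penalize counterfactual
    changes that align with change directions users labelled as infeasible."""
    delta_cf = (x_cf - x).flatten()
    loss = x_cf.new_zeros(())
    for delta_bad in infeasible_deltas:                      # user-labelled infeasible deltas
        loss = loss + torch.relu(torch.dot(delta_cf, delta_bad.flatten()) - margin)
    return loss
```

In the paper's generative approach, terms of this kind would be combined with a validity loss on the classifier's output and the VAE's KL regularizer; the exact formulation and weighting differ from this sketch.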

Experimental Evaluation and Results

The research includes empirical evaluations on both synthetic Bayesian network instances and the popular Adult Income dataset. The results showcase the efficacy of the proposed methods in achieving higher compliance with feasibility constraints compared to traditional methods such as contrastive explanations. Key findings include:

  • The proposed methods achieved markedly higher feasibility satisfaction, with the Example-Based CF model in particular learning from user feedback to generate more feasible counterfactuals.
  • In terms of computational efficiency, the generative model showed clear advantages over baseline methods, providing a scalable solution for applications that require frequent CF generation.

Implications and Future Directions

The paper lays the groundwork for counterfactual explanations that are not only interpretable but also actionable in real-world settings. By embedding causal reasoning into the generation process, it moves toward CF examples that are both theoretically sound and practically implementable.

Looking forward, this work opens avenues for integrating more sophisticated causal models and leveraging domain-specific knowledge to enhance the richness and applicability of CFs across various sectors. Further enhancement of interactive systems that learn feasibility constraints from user feedback holds promise in expanding the adaptability and personalization of AI explanations.

Conclusion

In conclusion, the research offers a significant advancement in the generation of counterfactual explanations by addressing the crucial aspect of feasibility through causal models. This ensures that explanations are not only consistent with the ML model's predictions but also with the real-world constraints under which users act, bridging a critical gap in the practical deployment of interpretable AI systems.

Authors (3)
  1. Divyat Mahajan (16 papers)
  2. Chenhao Tan (89 papers)
  3. Amit Sharma (88 papers)
Citations (196)