GLOBE-CE: A Translation-Based Approach for Global Counterfactual Explanations (arXiv:2305.17021v2)
Abstract: Counterfactual explanations have been widely studied in explainability, with a range of application-dependent methods prominent in fairness, recourse and model understanding. The major shortcoming of these methods, however, is their inability to provide explanations beyond the local or instance level. While many works touch on the notion of a global explanation, typically suggesting that masses of local explanations be aggregated in the hope of ascertaining global properties, few provide frameworks that are both reliable and computationally tractable. Meanwhile, practitioners are requesting more efficient and interactive explainability tools. We take this opportunity to propose Global & Efficient Counterfactual Explanations (GLOBE-CE), a flexible framework that tackles the reliability and scalability issues associated with the current state of the art, particularly on higher-dimensional datasets and in the presence of continuous features. Furthermore, we provide a unique mathematical analysis of categorical feature translations, which we utilise in our method. Experimental evaluation on publicly available datasets, together with user studies, demonstrates that GLOBE-CE performs significantly better than the current state of the art across multiple metrics (e.g., speed, reliability).
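The core idea named in the title, explaining a group of inputs with a single translation direction rather than one counterfactual per instance, can be sketched as follows. This is a minimal illustration under assumed details (a toy linear model, a direction chosen along the model's weight vector, per-instance scaling of one shared direction), not the authors' actual algorithm:

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy linear model standing in for the black-box classifier.
w, b = np.array([1.0, -2.0]), -0.5
def predict(X):
    return (X @ w + b > 0).astype(int)

X = rng.normal(size=(200, 2))
rejected = X[predict(X) == 0]      # inputs needing recourse

# One global translation direction, shared by every rejected input;
# a scalar magnitude k scales it per evaluation. Choosing delta along
# the weight vector is an illustrative assumption.
delta = w / np.linalg.norm(w)

def evaluate(k):
    """Coverage (fraction of rejected inputs flipped) and cost
    (distance moved, k since delta has unit norm) at scale k."""
    flipped = predict(rejected + k * delta) == 1
    return flipped.mean(), k

# Sweep magnitudes until the single direction covers most inputs.
for k in np.linspace(0.0, 6.0, 61):
    coverage, cost = evaluate(k)
    if coverage >= 0.95:
        break

print(f"k={k:.1f} covers {coverage:.0%} of rejected inputs at cost {cost:.1f}")
```

The trade-off the loop exposes, larger magnitudes flip more inputs but cost each individual more, is exactly the kind of coverage-versus-cost summary a global explanation must report, in contrast to aggregating hundreds of unrelated per-instance counterfactuals.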
- Dan Ley
- Saumitra Mishra
- Daniele Magazzeni