Calculating and Visualizing Counterfactual Feature Importance Values (2306.06506v1)

Published 10 Jun 2023 in cs.LG, cs.AI, and cs.HC

Abstract: Despite the success of complex machine learning algorithms, largely justified by their outstanding performance in prediction tasks, their inherently opaque nature still poses a challenge to responsible application. Counterfactual explanations have emerged as one potential solution for explaining individual decision results. However, two major drawbacks directly impact their usability: (1) the isonomic view of feature changes, in which it is not possible to observe how much each modified feature influences the prediction, and (2) the lack of graphical resources to visualize the counterfactual explanation. We introduce Counterfactual Feature (change) Importance (CFI) values as a solution: a way of assigning an importance value to each feature change in a given counterfactual explanation. To calculate these values, we propose two CFI methods. One is simple, fast, and greedy in nature; the other, coined CounterShapley, calculates Shapley values over the factual-counterfactual pair. Using these importance values, we additionally introduce three chart types to visualize counterfactual explanations: (a) the Greedy chart, which shows a greedy sequential path of prediction-score increases up to the predicted class change; (b) the CounterShapley chart, which depicts each feature's CounterShapley value in a simple one-dimensional chart; and (c) the Constellation chart, which shows all possible combinations of feature changes and their impact on the model's prediction score. For each proposed CFI method and visualization scheme, we show how it provides additional information about the counterfactual explanation. Finally, we offer an open-source implementation compatible with any counterfactual explanation generator algorithm. Code repository: https://github.com/ADMAntwerp/CounterPlots
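The abstract describes the two CFI methods only at a high level. Below is a minimal Python sketch of how such values could be computed, assuming a scikit-learn-style binary classifier exposing predict_proba and NumPy arrays for the factual and counterfactual instances; the function names (greedy_cfi, counter_shapley, _score) are illustrative, not the CounterPlots API.

```python
from itertools import combinations
from math import factorial


def _score(model, factual, counterfactual, subset):
    """Prediction score for the counterfactual class after applying
    only the feature changes in `subset` to the factual instance."""
    x = factual.copy()
    for i in subset:
        x[i] = counterfactual[i]
    return model.predict_proba(x.reshape(1, -1))[0, 1]


def greedy_cfi(model, factual, counterfactual):
    """Greedy CFI sketch: at each step, apply the single remaining feature
    change that raises the prediction score the most; that score gain is
    the feature's importance value."""
    changed = [i for i in range(len(factual)) if factual[i] != counterfactual[i]]
    applied, importance = [], {}
    current = _score(model, factual, counterfactual, applied)
    for _ in range(len(changed)):
        remaining = [i for i in changed if i not in applied]
        best = max(
            remaining,
            key=lambda i: _score(model, factual, counterfactual, applied + [i]),
        )
        new_score = _score(model, factual, counterfactual, applied + [best])
        importance[best] = new_score - current
        applied.append(best)
        current = new_score
    return importance


def counter_shapley(model, factual, counterfactual):
    """CounterShapley sketch: Shapley values over the set of feature
    changes. Each change's value is its marginal score contribution,
    averaged over all subsets of the other changes (exact computation,
    exponential in the number of changed features)."""
    changed = [i for i in range(len(factual)) if factual[i] != counterfactual[i]]
    n = len(changed)
    values = {}
    for i in changed:
        others = [j for j in changed if j != i]
        phi = 0.0
        for k in range(n):  # subset sizes 0 .. n-1
            for subset in combinations(others, k):
                weight = factorial(k) * factorial(n - k - 1) / factorial(n)
                phi += weight * (
                    _score(model, factual, counterfactual, list(subset) + [i])
                    - _score(model, factual, counterfactual, list(subset))
                )
        values[i] = phi
    return values
```

With a fitted classifier clf and instances x_factual and x_cf, counter_shapley(clf, x_factual, x_cf) returns one importance value per changed feature (the kind of values the CounterShapley chart would plot), while greedy_cfi traces the sequential path the Greedy chart depicts. By the efficiency property of Shapley values, the returned importances sum to the total score change between the factual and counterfactual instances.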
