
Introducing User Feedback-based Counterfactual Explanations (UFCE) (2403.00011v1)

Published 26 Feb 2024 in cs.LG, cs.AI, and cs.HC

Abstract: Machine learning models are widely used in real-world applications. However, their complexity often makes it challenging to interpret the rationale behind their decisions. Counterfactual explanations (CEs) have emerged as a viable solution for generating comprehensible explanations in eXplainable Artificial Intelligence (XAI). A CE provides actionable information on how to achieve a desired outcome with minimal modifications to the input. However, current CE algorithms usually operate over the entire feature space when optimizing the changes needed to overturn an undesired outcome, neither identifying the key contributors to that outcome nor assessing the practicality of the suggested changes. In this study, we introduce a novel methodology, named user feedback-based counterfactual explanation (UFCE), which addresses these limitations and aims to bolster confidence in the provided explanations. UFCE allows the inclusion of user constraints to determine the smallest modifications within a subset of actionable features while accounting for feature dependence, and it evaluates the practicality of the suggested changes using benchmark evaluation metrics. We conducted three experiments with five datasets, demonstrating that UFCE outperforms two well-known CE methods in terms of proximity, sparsity, and feasibility. The reported results indicate that user constraints influence the generation of feasible CEs.
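The sketch below illustrates the core idea the abstract describes: searching for a counterfactual while perturbing only user-approved (actionable) features within user-supplied bounds, and scoring the result with the proximity and sparsity metrics named above. It is a minimal illustration under assumptions, not the authors' UFCE implementation: the greedy search strategy, the function names (`find_counterfactual`, `proximity`, `sparsity`), and the scikit-learn-style `predict`/`predict_proba` interface are all hypothetical choices made for the example.

```python
import numpy as np

def find_counterfactual(model, x, actionable, bounds, target=1, steps=20):
    """Greedily search for a counterfactual of instance `x`.

    Only the feature indices in `actionable` are perturbed, and each one
    stays inside its user-supplied (low, high) bounds -- mimicking how
    user feedback constrains the search space. Illustrative sketch only,
    not the UFCE algorithm itself.
    """
    cf = x.astype(float).copy()
    for _ in range(steps):
        if model.predict(cf.reshape(1, -1))[0] == target:
            return cf  # prediction flipped: counterfactual found
        best, best_score = None, -np.inf
        for j in actionable:
            lo, hi = bounds[j]
            step = (hi - lo) / steps
            for delta in (step, -step):
                cand = cf.copy()
                cand[j] = np.clip(cand[j] + delta, lo, hi)
                # probability of the desired class for this candidate
                score = model.predict_proba(cand.reshape(1, -1))[0, target]
                if score > best_score:
                    best, best_score = cand, score
        cf = best
    return None  # no feasible counterfactual within the step budget

def sparsity(x, cf):
    """Number of features changed (lower is better)."""
    return int(np.sum(~np.isclose(x, cf)))

def proximity(x, cf):
    """Euclidean distance between instance and counterfactual (lower is better)."""
    return float(np.linalg.norm(x - cf))

# Example usage with a trained binary classifier `clf` and instance `x`:
# cf = find_counterfactual(clf, x, actionable=[0, 2],
#                          bounds={0: (0.0, 1.0), 2: (20.0, 60.0)})
```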

Authors (3)
  1. Muhammad Suffian
  2. Jose M. Alonso-Moral
  3. Alessandro Bogliolo