
Evolutionary approaches to explainable machine learning (2306.14786v1)

Published 23 Jun 2023 in cs.AI, cs.LG, and cs.NE

Abstract: Machine learning models are increasingly being used in critical sectors, but their black-box nature has raised concerns about accountability and trust. The field of explainable artificial intelligence (XAI) or explainable machine learning (XML) has emerged in response to the need for human understanding of these models. Evolutionary computing (EC), as a family of powerful optimization and learning tools, has significant potential to contribute to XAI/XML. In this chapter, we provide a brief introduction to XAI/XML and review various techniques in current use for explaining machine learning models. We then focus on how evolutionary computing can be used in XAI/XML, and review some approaches which incorporate EC techniques. We also discuss some open challenges in XAI/XML and opportunities for future research in this field using EC. Our aim is to demonstrate that evolutionary computing is well-suited for addressing current problems in explainability, and to encourage further exploration of these methods to contribute to the development of more transparent, trustworthy, and accountable machine learning models.
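
To make the abstract's claim concrete, the sketch below shows one way an evolutionary method can be applied to an explainability task: a simple genetic algorithm searching for a counterfactual explanation of a black-box classifier. This is an illustrative, assumption-laden example rather than the authors' method; the scikit-learn model, the fitness function (target-class probability minus a distance penalty), and all hyperparameters are placeholders chosen for brevity.

```python
# Illustrative sketch only: a toy genetic algorithm that evolves a counterfactual
# explanation for a black-box classifier. Model choice, fitness weighting, and
# hyperparameters are assumptions for demonstration, not the chapter's method.
import numpy as np
from sklearn.datasets import make_classification
from sklearn.ensemble import RandomForestClassifier

rng = np.random.default_rng(0)
X, y = make_classification(n_samples=500, n_features=5, random_state=0)
model = RandomForestClassifier(random_state=0).fit(X, y)  # the "black box"

x0 = X[0]                                          # instance to explain
target = 1 - model.predict(x0.reshape(1, -1))[0]   # the flipped (desired) class

def fitness(x):
    # Reward predicting the target class while staying close to the original input.
    p_target = model.predict_proba(x.reshape(1, -1))[0][target]
    return p_target - 0.1 * np.linalg.norm(x - x0)

pop = x0 + rng.normal(scale=0.5, size=(50, x0.size))  # initial population around x0
for _ in range(100):                                  # generations
    scores = np.array([fitness(ind) for ind in pop])
    parents = pop[np.argsort(scores)[-10:]]           # truncation selection (top 10)
    # Offspring: copy a random parent and apply Gaussian mutation.
    pop = parents[rng.integers(0, 10, size=50)] + rng.normal(scale=0.2, size=(50, x0.size))

best = max(pop, key=fitness)
print("original class:", model.predict(x0.reshape(1, -1))[0])
print("counterfactual class:", model.predict(best.reshape(1, -1))[0])
print("change applied:", best - x0)
```

Approaches surveyed in the chapter typically go further than this single weighted fitness, for example by using multi-objective evolutionary search that treats validity, proximity, and sparsity of the counterfactual as separate objectives.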

Authors (2)
  1. Ryan Zhou (2 papers)
  2. Ting Hu (23 papers)
Citations (7)