EVOTER: Evolution of Transparent Explainable Rule-sets (2204.10438v5)
Abstract: Most AI systems are black boxes that generate reasonable outputs for given inputs. Some domains, however, have explainability and trustworthiness requirements that cannot be directly met by these approaches. Various methods have therefore been developed to interpret black-box models after training. This paper advocates an alternative approach in which the models are transparent and explainable to begin with. The approach, EVOTER, evolves rule-sets based on simple logical expressions. It is evaluated in several prediction/classification and prescription/policy search domains, with and without a surrogate, and is shown to discover meaningful rule-sets that perform similarly to black-box models. The rules can provide insight into the domain and make biases hidden in the data explicit. It may also be possible to edit them directly to remove biases and add constraints. EVOTER thus forms a promising foundation for building trustworthy AI systems for real-world applications in the future.
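To make the core idea concrete, the sketch below evolves a small decision list of threshold rules ("if x[f] > t, predict p") for a toy binary-classification task. This is a minimal illustration only: the rule representation, variation operators, and hyperparameters here are assumptions for the sketch, not the paper's actual EVOTER implementation.

```python
import random

random.seed(0)

# Tiny synthetic binary-classification data: two features, label is 1
# when feature 0 exceeds 0.5, so an interpretable rule exists to find.
DATA = [([random.random(), random.random()], None) for _ in range(200)]
DATA = [(x, int(x[0] > 0.5)) for x, _ in DATA]
N_FEATURES = 2

def random_rule():
    # One rule: "if x[feature] > threshold then predict label".
    return (random.randrange(N_FEATURES), random.random(), random.randint(0, 1))

def random_rule_set(max_rules=4):
    # An ordered rule list with a default prediction (a decision list).
    return {"rules": [random_rule() for _ in range(random.randint(1, max_rules))],
            "default": random.randint(0, 1)}

def predict(rule_set, x):
    # Rules are checked in order; the first one that fires decides.
    for feature, threshold, label in rule_set["rules"]:
        if x[feature] > threshold:
            return label
    return rule_set["default"]

def fitness(rule_set):
    # Classification accuracy on the training data.
    return sum(predict(rule_set, x) == y for x, y in DATA) / len(DATA)

def mutate(rule_set):
    # Replace one rule; occasionally flip the default prediction.
    child = {"rules": list(rule_set["rules"]), "default": rule_set["default"]}
    child["rules"][random.randrange(len(child["rules"]))] = random_rule()
    if random.random() < 0.1:
        child["default"] = 1 - child["default"]
    return child

def crossover(a, b):
    # One-point crossover over the two ordered rule lists.
    cut_a = random.randint(0, len(a["rules"]))
    cut_b = random.randint(0, len(b["rules"]))
    rules = (a["rules"][:cut_a] + b["rules"][cut_b:]) or [random_rule()]
    return {"rules": rules[:6], "default": random.choice([a["default"], b["default"]])}

def evolve(pop_size=50, generations=40):
    # Simple truncation selection: keep the fitter half, breed the rest.
    population = [random_rule_set() for _ in range(pop_size)]
    for _ in range(generations):
        population.sort(key=fitness, reverse=True)
        parents = population[: pop_size // 2]
        children = [mutate(crossover(random.choice(parents), random.choice(parents)))
                    for _ in range(pop_size - len(parents))]
        population = parents + children
    return max(population, key=fitness)

best = evolve()
print("accuracy:", fitness(best))
for f, t, label in best["rules"]:
    print(f"if x[{f}] > {t:.2f}: predict {label}")
print("else: predict", best["default"])
```

The evolved decision list can be printed and read directly, which is the point the abstract makes: unlike a black-box model, a rule-set of this form can be inspected for domain insight, audited for biases, and in principle edited by hand before deployment.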
Authors: Hormoz Shahrzad, Babak Hodjat, Risto Miikkulainen