Data Science Principles for Interpretable and Explainable AI (2405.10552v2)
Abstract: Society's capacity for algorithmic problem-solving has never been greater. Artificial Intelligence is now applied across more domains than ever, a consequence of powerful abstractions, abundant data, and accessible software. As capabilities have expanded, so have risks, with models often deployed without a full understanding of their potential impacts. Interpretable and interactive machine learning aims to make complex models more transparent and controllable, enhancing user agency. This review synthesizes key principles from the growing literature in this field. We first introduce precise vocabulary for discussing interpretability, such as the distinction between glass box and explainable models. We then explore connections to classical statistical and design principles, such as parsimony and the gulfs of interaction. Basic explainability techniques -- including learned embeddings, integrated gradients, and concept bottlenecks -- are illustrated with a simple case study. We also review criteria for objectively evaluating interpretability approaches. Throughout, we underscore the importance of considering audience goals when designing interactive data-driven systems. Finally, we outline open challenges and discuss the potential role of data science in addressing them. Code to reproduce all examples can be found at https://go.wisc.edu/3k1ewe.
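Since the abstract names integrated gradients among the techniques illustrated in its case study, a minimal sketch may help fix ideas. This is not the paper's own code (that lives at the link above) but an illustrative toy: it assumes a simple logistic model f(x) = sigmoid(w·x) whose gradient is available in closed form, and all function and variable names here are hypothetical.

```python
import numpy as np

def integrated_gradients(x, baseline, w, steps=50):
    """Toy integrated-gradients attribution for a logistic model f(z) = sigmoid(w @ z).

    Interpolates between the baseline x' and the input x and averages the
    model gradient along that path (Sundararajan et al., 2017), then scales
    by (x - x') to obtain per-feature attributions.
    """
    alphas = np.linspace(0.0, 1.0, steps)
    grads = []
    for a in alphas:
        z = baseline + a * (x - baseline)          # point on the straight-line path
        p = 1.0 / (1.0 + np.exp(-w @ z))           # model output at z
        grads.append(p * (1.0 - p) * w)            # closed-form gradient df/dz
    avg_grad = np.mean(grads, axis=0)              # Riemann approximation of the path integral
    return (x - baseline) * avg_grad               # attribution per feature

# Hypothetical weights and input, with a zero baseline.
w = np.array([1.5, -2.0, 0.5])
x = np.array([1.0, 0.5, -1.0])
print(integrated_gradients(x, np.zeros_like(x), w))
```

With a fine enough discretization, the attributions sum approximately to f(x) - f(baseline), the completeness property that motivates the method; for deep networks one would compute the path gradients with automatic differentiation rather than the closed form used in this sketch.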