Incremental XAI: Memorable Understanding of AI with Incremental Explanations (2404.06733v1)
Abstract: Many explainable AI (XAI) techniques strive for interpretability by providing concise salient information, such as sparse linear factors. However, users see either inaccurate global explanations or highly varying local explanations. We propose to provide more detailed explanations by leveraging the human cognitive capacity to accumulate knowledge incrementally as more details are received. Focusing on linear factor explanations (factors $\times$ values = outcome), we introduce Incremental XAI, which automatically partitions explanations for general and atypical instances by providing Base + Incremental factors to help users read and remember more faithful explanations. Memorability is improved by reusing base factors and reducing the number of factors shown in atypical cases. In modeling, formative, and summative user studies, we evaluated the faithfulness, memorability, and understandability of Incremental XAI against baseline explanation methods. This work contributes towards more usable explanations that users can better internalize to facilitate intuitive engagement with AI.
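The Base + Incremental scheme described in the abstract can be sketched in a few lines of Python. The sketch below is illustrative only, not the paper's implementation: the feature names and weights are hypothetical (loosely evoking a house-price task), and the key idea shown is that atypical instances reuse the base factors and add only a small set of incremental adjustments, so the explanation stays in the form factors $\times$ values = outcome.

```python
# Minimal illustrative sketch of Base + Incremental linear factor explanations.
# All feature names and weights below are hypothetical examples, not values
# from the paper.

base_factors = {"sqft": 150.0, "grade": 12000.0, "bathrooms": 8000.0}

# Incremental factors for a hypothetical atypical subgroup (e.g., waterfront
# homes). Base factors are reused; only these deltas are added on top.
incremental_factors = {"sqft": 40.0, "waterfront": 55000.0}

def explain(instance, atypical=False):
    """Return (prediction, per-factor contributions) as factors x values."""
    factors = dict(base_factors)
    if atypical:
        # Reuse the base factors; adjust or add only the incremental ones,
        # which keeps the atypical explanation short and memorable.
        for name, delta in incremental_factors.items():
            factors[name] = factors.get(name, 0.0) + delta
    contributions = {name: w * instance.get(name, 0.0)
                     for name, w in factors.items()}
    return sum(contributions.values()), contributions

pred, contrib = explain(
    {"sqft": 2000, "grade": 8, "bathrooms": 2, "waterfront": 1},
    atypical=True,
)
print(f"predicted outcome: {pred:,.0f}")
for name, c in contrib.items():
    print(f"  {name}: {c:,.0f}")
```

Under these assumptions, a user who has learned the base factors for typical instances only needs to remember the two incremental deltas to explain the atypical case, rather than a wholly new set of local factors.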