Incremental XAI: Memorable Understanding of AI with Incremental Explanations (2404.06733v1)

Published 10 Apr 2024 in cs.HC and cs.AI

Abstract: Many explainable AI (XAI) techniques strive for interpretability by providing concise salient information, such as sparse linear factors. However, users either only see inaccurate global explanations, or highly-varying local explanations. We propose to provide more detailed explanations by leveraging the human cognitive capacity to accumulate knowledge by incrementally receiving more details. Focusing on linear factor explanations (factors $\times$ values = outcome), we introduce Incremental XAI to automatically partition explanations for general and atypical instances by providing Base + Incremental factors to help users read and remember more faithful explanations. Memorability is improved by reusing base factors and reducing the number of factors shown in atypical cases. In modeling, formative, and summative user studies, we evaluated the faithfulness, memorability and understandability of Incremental XAI against baseline explanation methods. This work contributes towards more usable explanation that users can better ingrain to facilitate intuitive engagement with AI.
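
To make the linear factor form (factors $\times$ values = outcome) concrete, consider an illustrative housing-price explanation with invented numbers (not values from the paper): a typical instance is explained with Base factors only, while an atypical instance reuses the same Base factors and adds one Incremental factor:

$\underbrace{280 \times \text{size} + 15000 \times \text{bathrooms}}_{\text{Base factors}} + \underbrace{90000 \times \text{waterfront}}_{\text{Incremental factor}} = \text{predicted price}$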

References (68)
  1. Trends and trajectories for explainable, accountable and intelligible systems: An hci research agenda. In Proceedings of the 2018 CHI conference on human factors in computing systems. 1–18.
  2. COGAM: Measuring and Moderating Cognitive Load in Machine Learning Model Explanations. In Proceedings of the 2020 CHI Conference on Human Factors in Computing Systems. 1–14.
  3. Amina Adadi and Mohammed Berrada. 2018. Peeking inside the black-box: a survey on explainable artificial intelligence (XAI). IEEE access 6 (2018), 52138–52160.
  4. Explainable Artificial Intelligence (XAI): Concepts, taxonomies, opportunities and challenges toward responsible AI. Information fusion 58 (2020), 82–115.
  5. On pixel-wise explanations for non-linear classifier decisions by layer-wise relevance propagation. PloS one 10, 7 (2015), e0130140.
  6. Beyond accuracy: The role of mental models in human-AI team performance. In Proceedings of the AAAI conference on human computation and crowdsourcing, Vol. 7. 2–11.
  7. Sparse robust regression for explaining classifiers. In Discovery Science: 22nd International Conference, DS 2019, Split, Croatia, October 28–30, 2019, Proceedings 22. Springer, 351–366.
  8. SLISEMAP: Supervised dimensionality reduction through local explanations. Machine Learning 112, 1 (2023), 1–43.
  9. Leo Breiman. 2001. Random forests. Machine learning 45 (2001), 5–32.
  10. To trust or to think: cognitive forcing functions can reduce overreliance on AI in AI-assisted decision-making. Proceedings of the ACM on Human-Computer Interaction 5, CSCW1 (2021), 1–21.
  11. Ruth MJ Byrne. 2019. Counterfactuals in Explainable Artificial Intelligence (XAI): Evidence from Human Reasoning.. In IJCAI. 6276–6282.
  12. Intelligible models for healthcare: Predicting pneumonia risk and hospital 30-day readmission. In Proceedings of the 21th ACM SIGKDD international conference on knowledge discovery and data mining. 1721–1730.
  13. Marco Cerliani. 2022. linear-trees. https://github.com/cerlymarco/linear-tree.
  14. Equi-explanation maps: concise and informative global summary explanations. In Proceedings of the 2022 ACM Conference on Fairness, Accountability, and Transparency. 464–472.
  15. Algorithmic transparency via quantitative input influence: Theory and experiments with learning systems. In 2016 IEEE symposium on security and privacy (SP). IEEE, 598–617.
  16. Finale Doshi-Velez and Been Kim. 2017. Towards a rigorous science of interpretable machine learning. arXiv preprint arXiv:1702.08608 (2017).
  17. Pierre Dragicevic. 2018. Can we call mean differences “effect sizes”. https://transparentstatistics.org/2018/07/05/meanings-effect-size. Accessed: 2023-12-10.
  18. Brittle AI, causal confusion, and bad mental models: challenges and successes in the XAI program. arXiv preprint arXiv:2106.05506 (2021).
  19. A survey of graph edit distance. Pattern Analysis and applications 13 (2010), 113–129.
  20. Shirley Gregor and Izak Benbasat. 1999. Explanations from intelligent systems: Theoretical foundations and implications for practice. MIS quarterly (1999), 497–530.
  21. Harlfoxem. 2016. House Sales in King County, USA. https://www.kaggle.com/harlfoxem/housesalesprediction.
  22. Fritz Heider. 2013. The psychology of interpersonal relations. Psychology Press.
  23. Heart Disease. UCI Machine Learning Repository. DOI: https://doi.org/10.24432/C52P4X.
  24. Sensible AI: Re-imagining Interpretability and Explainability using Sensemaking Theory. In 2022 ACM Conference on Fairness, Accountability, and Transparency.
  25. " Help Me Help the AI": Understanding How Explainability Can Support Human-AI Interaction. In Proceedings of the 2023 CHI Conference on Human Factors in Computing Systems. 1–17.
  26. Will you accept an imperfect ai? exploring designs for adjusting end-user expectations of ai systems. In Proceedings of the 2019 CHI Conference on Human Factors in Computing Systems. 1–14.
  27. Interacting with predictions: Visual inspection of black-box machine learning models. In Proceedings of the 2016 CHI conference on human factors in computing systems. 5686–5697.
  28. Human evaluation of models built for interpretability. In Proceedings of the AAAI Conference on Human Computation and Crowdsourcing, Vol. 7. 59–67.
  29. Interpretable decision sets: A joint framework for description and prediction. In Proceedings of the 22nd ACM SIGKDD international conference on knowledge discovery and data mining. 1675–1684.
  30. Faithful and customizable explanations of black box models. In Proceedings of the 2019 AAAI/ACM Conference on AI, Ethics, and Society. 131–138.
  31. What do we want from Explainable Artificial Intelligence (XAI)?–A stakeholder perspective on XAI and a conceptual model guiding interdisciplinary XAI research. Artificial Intelligence 296 (2021), 103473.
  32. Interpretable classifiers using rules and bayesian analysis: Building a better stroke prediction model. (2015).
  33. Q Vera Liao and Kush R Varshney. 2021. Human-centered explainable ai (xai): From algorithms to user experiences. arXiv preprint arXiv:2110.10790 (2021).
  34. Diagrammatization: Rationalizing with diagrammatic AI explanations for abductive reasoning on hypotheses. arXiv preprint arXiv:2302.01241 (2023).
  35. Brian Y Lim and Anind K Dey. 2009. Assessing demand for intelligibility in context-aware applications. In Proceedings of the 11th international conference on Ubiquitous computing. 195–204.
  36. Brian Y Lim and Anind K Dey. 2011. Design of an intelligible mobile context-aware application. In Proceedings of the 13th international conference on human computer interaction with mobile devices and services. 157–166.
  37. Brian Y Lim and Anind K Dey. 2013. Evaluating intelligibility usage and usefulness in a context-aware application. In International Conference on Human-Computer Interaction. Springer, 92–101.
  38. Why and why not explanations improve the intelligibility of context-aware intelligent systems. In Proceedings of the SIGCHI conference on human factors in computing systems. 2119–2128.
  39. Tania Lombrozo. 2006. The structure and function of explanations. Trends in cognitive sciences 10, 10 (2006), 464–470.
  40. Tania Lombrozo. 2007. Simplicity and probability in causal explanation. Cognitive psychology 55, 3 (2007), 232–257.
  41. Why does my model fail? contrastive local explanations for retail forecasting. In Proceedings of the 2020 Conference on Fairness, Accountability, and Transparency. 90–98.
  42. Scott M Lundberg and Su-In Lee. 2017. A unified approach to interpreting model predictions. Advances in neural information processing systems 30 (2017).
  43. Imma Sort by Two or More Attributes With Interpretable Monotonic Multi-Attribute Sorting. IEEE Transactions on Visualization and Computer Graphics 27, 4 (2020), 2369–2384.
  44. Tim Miller. 2019. Explanation in artificial intelligence: Insights from the social sciences. Artificial intelligence 267 (2019), 1–38.
  45. How do humans understand explanations from machine learning systems? an evaluation of the human-interpretability of explanation. arXiv preprint arXiv:1802.00682 (2018).
  46. Model agnostic multilevel explanations. Advances in neural information processing systems 33 (2020), 5968–5979.
  47. On the Importance of User Backgrounds and Impressions: Lessons Learned from Interactive AI Applications. ACM Transactions on Interactive Intelligent Systems 12, 4 (2022), 1–29.
  48. Manipulating and measuring model interpretability. In Proceedings of the 2021 CHI conference on human factors in computing systems. 1–52.
  49. J. Ross Quinlan. 1986. Induction of decision trees. Machine learning 1 (1986), 81–106.
  50. R. Quinlan. 1993. Auto MPG. UCI Machine Learning Repository. DOI: https://doi.org/10.24432/C5859H.
  51. " Why should i trust you?" Explaining the predictions of any classifier. In Proceedings of the 22nd ACM SIGKDD international conference on knowledge discovery and data mining. 1135–1144.
  52. Anchors: High-precision model-agnostic explanations. In Proceedings of the AAAI conference on artificial intelligence, Vol. 32.
  53. Data-driven storytelling. CRC Press.
  54. Jason D Rights and Sonya K Sterba. 2019. Quantifying explained variance in multilevel models: An integrative framework for defining R-squared measures. Psychological methods 24, 3 (2019), 309.
  55. Interpretable machine learning: Fundamental principles and 10 grand challenges. Statistic Surveys 16 (2022), 1–85.
  56. Glocalx-from local to global explanations of black box ai models. Artificial Intelligence 294 (2021), 103457.
  57. Thomas J Shuell. 1986. Cognitive conceptions of learning. Review of educational research 56, 4 (1986), 411–436.
  58. Aaron Springer and Steve Whittaker. 2020. Progressive disclosure: When, why, and how do users want algorithmic transparency information? ACM Transactions on Interactive Intelligent Systems (TiiS) 10, 4 (2020), 1–32.
  59. Axiomatic attribution for deep networks. In International conference on machine learning. PMLR, 3319–3328.
  60. Exploring and promoting diagnostic transparency and explainability in online symptom checkers. In Proceedings of the 2021 CHI Conference on Human Factors in Computing Systems. 1–17.
  61. Berk Ustun and Cynthia Rudin. 2016. Supersparse linear integer models for optimized medical scoring systems. Machine Learning 102 (2016), 349–391.
  62. Counterfactual explanations without opening the black box: Automated decisions and the GDPR. Harv. JL & Tech. 31 (2017), 841.
  63. Designing theory-driven user-centric explainable AI. In Proceedings of the 2019 CHI conference on human factors in computing systems. 1–15.
  64. Show or suppress? Managing input uncertainty in machine learning model explanations. Artificial Intelligence 294 (2021), 103456.
  65. Xinru Wang and Ming Yin. 2023. Watch Out for Updates: Understanding the Effects of Model Explanation Updates in AI-Assisted Decision Making. In Proceedings of the 2023 CHI Conference on Human Factors in Computing Systems. 1–19.
  66. Interpretable Directed Diversity: Leveraging Model Explanations for Iterative Crowd Ideation. In CHI Conference on Human Factors in Computing Systems. 1–28.
  67. Understanding the effect of accuracy on trust in machine learning models. In Proceedings of the 2019 chi conference on human factors in computing systems. 1–12.
  68. Wencan Zhang and Brian Y Lim. 2022. Towards Relatable Explainable AI with the Perceptual Process. In CHI Conference on Human Factors in Computing Systems. 1–24.

Summary

  • The paper introduces Incremental XAI, a method that discloses Base factors first and Incremental factors as needed, to support user comprehension.
  • It develops a tree-based framework with linear model trees and factor sparsity regularization to balance detail with simplicity.
  • User studies show that Incremental explanations are nearly as memorable as global explanations while approaching the faithfulness of local ones.

Incremental XAI: Enhancing User Comprehension Through Gradual Explanation

Introduction

Explainable AI (XAI) represents a vital bridge between advanced AI methods and their users. However, the complexity and variety of these methods often act as a barrier to widespread understanding and adoption. Typical strategies for simplifying explanations, such as using sparse linear models, frequently fall short, either by oversimplifying the model's operations or overwhelming users with too much information at once. Addressing these issues, this paper introduces Incremental XAI, a novel approach that aims to bolster users' understanding by providing explanations that gradually increase in detail.

Framework Overview

Incremental XAI is rooted in pedagogical principles, advocating a stepwise learning process, and builds on the observation that the full complexity of an AI model can be daunting for users. Instead of offering broad, generalized explanations (Global) or highly specific, instance-based details (Local), Incremental XAI takes a middle path: it partitions instances into 'typical' and 'atypical' cases, explains typical cases with a base set of factors, and incrementally adds factors for the atypical ones.

The method introduces two core concepts: Base and Incremental factors. Users are first presented with Base factors that apply to the majority of instances, providing a scaffold for their understanding. When a more detailed explanation is needed, additional Incremental factors are revealed, improving the explanation's fidelity to the underlying model without departing from the foundation established by the Base factors.
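
A rough sketch of how such an explanation could be represented and applied in code is given below; the factor names, weights, and subgroup label are hypothetical illustrations, not values learned by the paper's method.

    # A minimal sketch of composing Base + Incremental factor explanations.
    # Factor names and weights are invented for illustration only.

    base_factors = {"size_sqft": 280.0, "bathrooms": 15000.0}   # reused for every instance
    incremental_factors = {                                      # extra factors for atypical subgroups
        "waterfront": {"waterfront": 90000.0},
    }

    def explain(instance, subgroup=None):
        """Return (prediction, factors shown): Base factors always apply; Incremental
        factors are added only when the instance falls in an atypical subgroup."""
        factors = dict(base_factors)
        if subgroup is not None:
            factors.update(incremental_factors[subgroup])
        prediction = sum(w * instance.get(name, 0.0) for name, w in factors.items())
        return prediction, factors

    # Typical instance: only the Base factors are shown and used.
    print(explain({"size_sqft": 1500, "bathrooms": 2}))
    # Atypical instance: the same Base factors are reused, plus one Incremental factor.
    print(explain({"size_sqft": 1500, "bathrooms": 2, "waterfront": 1}, subgroup="waterfront"))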

Implementation and Evaluation

The paper operationalizes this concept with a tree-based incremental explanation framework built on linear model trees. The tree partitions instances into subgroups, fits additive linear factors within each, and applies factor sparsity regularization so that users only need to track a compact set of factors, keeping cognitive load low.
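
A minimal sketch of this kind of pipeline, assuming standard scikit-learn components rather than the authors' implementation: a shallow decision tree partitions instances into subgroups, and an L1-regularized linear model per subgroup stands in for factor sparsity regularization. Unlike the paper's method, this sketch does not constrain subgroup models to reuse a shared set of Base factors, and the toy data, tree depth, and regularization strength are assumptions for illustration.

    import numpy as np
    from sklearn.tree import DecisionTreeRegressor
    from sklearn.linear_model import Lasso

    # Toy data: the fourth feature only matters in an "atypical" region (x2 > 1).
    rng = np.random.default_rng(0)
    X = rng.normal(size=(500, 5))
    y = 3 * X[:, 0] + 2 * X[:, 1] + 5 * (X[:, 2] > 1) * X[:, 3] + rng.normal(scale=0.1, size=500)

    # A shallow tree partitions instances into a small number of subgroups.
    partition = DecisionTreeRegressor(max_depth=1).fit(X, y)
    leaf_ids = partition.apply(X)

    # A sparse linear model per subgroup keeps the set of non-zero factors small.
    subgroup_models = {}
    for leaf in np.unique(leaf_ids):
        mask = leaf_ids == leaf
        subgroup_models[leaf] = Lasso(alpha=0.05).fit(X[mask], y[mask])
        print(f"subgroup {leaf}: factors = {np.round(subgroup_models[leaf].coef_, 2)}")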

The empirical evaluation compares Incremental XAI against baseline explanation methods (Global, Subglobal, and Local) along several criteria: faithfulness, usability, understandability, and memorability. Results show that Incremental XAI offers a balanced trade-off, matching the memorability of Global explanations while approaching the detail and faithfulness of Local explanations.
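
Faithfulness here refers to how closely the explanation's own factors $\times$ values prediction tracks the underlying model's prediction. A hedged sketch of one way to quantify it follows; the mean-absolute-error metric and the function names are illustrative assumptions, not necessarily the paper's exact measure.

    import numpy as np

    def faithfulness_error(model_predict, explanation_predict, X):
        """Mean absolute gap between the black-box model's predictions and the
        outcomes reconstructed from an explanation's factors and values."""
        return float(np.mean(np.abs(model_predict(X) - explanation_predict(X))))

    # Hypothetical usage: lower error means a more faithful explanation.
    # err_global = faithfulness_error(black_box.predict, global_linear.predict, X_test)
    # err_incremental = faithfulness_error(black_box.predict, incremental_tree.predict, X_test)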

Theoretical and Practical Implications

Theoretical

The Incremental XAI paradigm challenges current XAI research norms by suggesting a more dynamic, user-centric approach to explanations. It emphasizes the progression of user understanding over time, integrating concepts from cognitive psychology, such as the limits of working memory and the process of schema formation. This aligns with recent calls within the HCI and AI communities for explanations to support user learning and adaptability, rather than merely serving as static descriptions of AI behavior.

Practical

From a practical standpoint, this research underscores the importance of considering the end-user's cognitive process when designing explanation systems. By applying Incremental XAI, developers can create more accessible and effective tools for a range of users, from novices to experts across various domains. The methodology provides a structured approach to simplify the introduction of complex models, potentially enhancing user trust and facilitating broader adoption of AI technologies.

Future Directions

Although the framework presents a promising direction, its application has been primarily demonstrated with linear models. Future work could explore extending the incremental approach to other types of models or explanation formats, such as graphical or rule-based explanations. Additionally, long-term user studies could provide deeper insights into how Incremental XAI supports the learning process over time and its impact on user trust and decision-making.

Conclusion

Incremental XAI represents a significant step forward in the field of explainable AI. By acknowledging and addressing the cognitive challenges users face when interacting with complex AI systems, this approach offers a promising pathway to demystify AI decision-making processes, ultimately enhancing transparency, trust, and user empowerment.
