Towards Directive Explanations: Crafting Explainable AI Systems for Actionable Human-AI Interactions (2401.04118v2)
Abstract: With AI becoming ubiquitous across application domains, explanations are essential for fostering transparency and trust among non-technical users. Despite the potential of Explainable AI (XAI) to enhance understanding of complex AI systems, most XAI methods are designed for technical AI experts rather than non-technical consumers. Consequently, such explanations are often overwhelmingly complex and rarely guide users toward their desired predicted outcomes. This paper presents ongoing research on crafting XAI systems tailored to guide users toward desired outcomes through improved human-AI interactions. It highlights the research objectives and methods, along with key takeaways and implications learned from user studies, and outlines open questions and challenges for enhanced human-AI collaboration that the author aims to address in future work.
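To make the idea of a directive explanation concrete, below is a minimal, hypothetical sketch (not the paper's actual system): it fits a toy risk classifier on synthetic data, then greedily searches for a small feature change that flips a high-risk prediction to low risk, and phrases the result as an actionable suggestion. The feature names, synthetic data, and greedy search strategy are all illustrative assumptions.

```python
# Illustrative sketch only: a toy "directive explanation" in the spirit of
# guiding a user toward a desired predicted outcome. The feature names,
# synthetic data, and greedy search below are hypothetical assumptions,
# not the method proposed in the paper.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)
features = ["glucose", "bmi", "activity_hours"]

# Synthetic "diabetes risk" data: higher glucose/BMI and lower activity
# raise the probability of the high-risk class (label 1).
X = rng.normal([110, 28, 3], [25, 5, 2], size=(500, 3))
score = (X[:, 0] - 110) / 25 + (X[:, 1] - 28) / 5 - (X[:, 2] - 3) / 2
y = (score + rng.normal(0, 0.5, 500) > 0).astype(int)

model = LogisticRegression().fit(X, y)

def directive(x, step=0.25, max_steps=40):
    """Greedily nudge the most influential feature (largest |coefficient|)
    against the model's decision until the prediction flips to low risk."""
    x = x.copy()
    i = np.argmax(np.abs(model.coef_[0]))  # most influential feature
    for _ in range(max_steps):
        if model.predict(x.reshape(1, -1))[0] == 0:
            break
        # The coefficient's sign tells which direction lowers the risk score.
        x[i] -= np.sign(model.coef_[0][i]) * step * X[:, i].std()
    return x

patient = np.array([150.0, 33.0, 1.0])  # predicted high risk
suggestion = directive(patient)
for name, before, after in zip(features, patient, suggestion):
    if abs(before - after) > 1e-9:
        print(f"Directive: change {name} from {before:.1f} to {after:.1f} "
              f"to reach the low-risk prediction.")
```

Actual directive-explanation methods (e.g., counterfactual or algorithmic-recourse techniques) additionally constrain suggestions to be feasible and actionable for the user, which this toy search ignores.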