Towards Directive Explanations: Crafting Explainable AI Systems for Actionable Human-AI Interactions (2401.04118v2)

Published 29 Dec 2023 in cs.HC

Abstract: With AI becoming ubiquitous across application domains, explanations are essential for enhancing transparency and trust among non-technical users. Despite the potential of Explainable AI (XAI) for improving understanding of complex AI systems, most XAI methods are designed for technical AI experts rather than non-technical consumers. Consequently, such explanations are overwhelmingly complex and seldom guide users toward their desired predicted outcomes. This paper presents ongoing research on crafting XAI systems that guide users in achieving desired outcomes through improved human-AI interactions. It highlights the research objectives and methods, along with key takeaways and implications learned from user studies, and outlines open questions and challenges for enhanced human-AI collaboration that the author aims to address in future work.
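To make the core idea concrete: a "directive" explanation does not just attribute a prediction to features, it tells the user what to change to reach a desired outcome. Below is a minimal illustrative sketch, not the paper's actual method: it trains a toy classifier and greedily searches for the smallest change to one mutable feature that flips the prediction. The function name, step size, and feature choice are all hypothetical assumptions for illustration.

```python
# Illustrative sketch of a directive (actionable) explanation.
# Assumption: a simple greedy one-feature search stands in for the
# paper's approach, which is not specified in the abstract.
import numpy as np
from sklearn.datasets import make_classification
from sklearn.linear_model import LogisticRegression

# Toy data and model; in practice this would be the deployed predictor.
X, y = make_classification(n_samples=500, n_features=4, random_state=0)
model = LogisticRegression().fit(X, y)

def directive_explanation(x, feature, model, step=0.05, max_steps=200):
    """Nudge one feature in either direction until the predicted class flips.

    Returns the required delta for that feature, or None if no flip is
    found within max_steps * step in either direction.
    """
    base = model.predict(x.reshape(1, -1))[0]
    for direction in (+1.0, -1.0):
        x_new = x.copy()
        for i in range(1, max_steps + 1):
            x_new[feature] = x[feature] + direction * step * i
            if model.predict(x_new.reshape(1, -1))[0] != base:
                return x_new[feature] - x[feature]
    return None

x0 = X[0]
delta = directive_explanation(x0, feature=2, model=model)
if delta is not None:
    # The directive: a concrete, actionable change rather than a raw attribution.
    print(f"To change the predicted outcome, adjust feature 2 by {delta:+.2f}.")
```

A production system would additionally constrain the search to features the user can actually act on (e.g., exercise habits but not age) and report the cheapest feasible change, which is the gap between feature attributions and the actionable guidance this line of work targets.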

Authors (1)
  1. Aditya Bhattacharya (12 papers)
Citations (4)