Tell me more: Intent Fulfilment Framework for Enhancing User Experiences in Conversational XAI (2405.10446v1)
Abstract: The evolution of Explainable Artificial Intelligence (XAI) has emphasised the significance of meeting diverse user needs. The approaches to identifying and addressing these needs must advance accordingly, recognising that explanation experiences are subjective, user-centred processes in which the system interacts with users towards a better understanding of AI decision-making. This paper delves into the interrelations in multi-faceted XAI and examines how different types of explanations collaboratively meet users' XAI needs. We introduce the Intent Fulfilment Framework (IFF) for creating explanation experiences. The novelty of this paper lies in recognising the importance of "follow-up" on explanations for obtaining clarity, verification and/or substitution. Moreover, the Explanation Experience Dialogue Model integrates the IFF and "Explanation Followups" to provide users with a conversational interface for exploring their explanation needs, thereby creating explanation experiences. Quantitative and qualitative findings from our comparative user study demonstrate the impact of the IFF in improving user engagement, the utility of the AI system and the overall user experience. Overall, we reinforce the principle that "one explanation does not fit all", creating explanation experiences that guide this complex interaction through conversation.
- Anjana Wijekoon
- David Corsar
- Nirmalie Wiratunga
- Kyle Martin
- Pedram Salimi