
Utilising Explanations to Mitigate Robot Conversational Failures (2307.04462v1)

Published 10 Jul 2023 in cs.HC and cs.RO

Abstract: This paper presents an overview of robot failure detection work from HRI and adjacent fields, using failures as an opportunity to examine robot explanation behaviours. As humanoid robots remain experimental tools in the early 2020s, interactions with robots are situated overwhelmingly in controlled environments, typically studying various interactional phenomena. Such interactions suffer from a lack of real-world and large-scale experimentation and tend to ignore the 'imperfectness' of the everyday user. Robot explanations can be used to approach and mitigate failures, by expressing robot legibility and incapability, and within the perspective of common ground. In this paper, I discuss how failures present opportunities for explanations in interactive conversational robots and what the potentials are for the intersection of HRI and explainability research.

