Effects of Explanation Strategies to Resolve Failures in Human-Robot Collaboration (2309.10127v1)
Abstract: Despite significant improvements in robot capabilities, robots are likely to fail in human-robot collaborative tasks due to high unpredictability in human environments and varying human expectations. In this work, we explore the role of a robot's explanations of its failures in a human-robot collaborative task. We present a user study that incorporates common failures in collaborative tasks and relies on human assistance to resolve them. In the study, a robot and a human work together to fill a shelf with objects. Upon encountering a failure, the robot explains the failure and the resolution needed to overcome it, either through handovers or by the human completing the task. The study is conducted with different levels of robotic explanation, based on the failure action, failure cause, and action history, and with different strategies for providing the explanation over the course of repeated interactions. Our results show that success in resolving failures is a function not only of the level of explanation but also of the type of failure. Furthermore, while novice users rate the robot higher overall in terms of satisfaction with its explanations, their satisfaction depends not only on the robot's explanation level in a given round but also on the prior information they received from the robot.