
Happily Error After: Framework Development and User Study for Correcting Robot Perception Errors in Virtual Reality (2306.14589v1)

Published 26 Jun 2023 in cs.RO and cs.HC

Abstract: While robots are appearing in more areas of our lives, they still make errors. One common cause of failure is the robot's perception module misdetecting objects. Allowing users to correct such errors can improve the interaction and prevent the same errors from recurring. We therefore investigate the effectiveness of a virtual reality (VR) framework for correcting perception errors of a Franka Panda robot. We conducted a user study with 56 participants who interacted with the robot through both VR and screen interfaces. Participants learned to collaborate with the robot faster with the VR interface than with the screen interface. They also found the VR interface more immersive and enjoyable, and expressed a preference for using it again. These findings suggest that VR interfaces may offer advantages over screen interfaces for human-robot interaction in error-prone environments.
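
The abstract does not detail how corrections are represented, but the core idea (a user-supplied label override that persists so the same perception error is not repeated) can be sketched in a few lines. The following is a minimal, hypothetical Python sketch: the Detection and PerceptionCorrector names and fields are illustrative assumptions, not the paper's actual framework or API.

    from dataclasses import dataclass, field

    # Hypothetical sketch: all names below are illustrative assumptions,
    # not taken from the paper's actual VR framework.

    @dataclass
    class Detection:
        object_id: int      # persistent ID assigned to the detected object
        label: str          # class label predicted by the perception module
        confidence: float   # detector confidence score

    @dataclass
    class PerceptionCorrector:
        # Maps object IDs to user-supplied labels, so a correction made once
        # (e.g., via the VR interface) overrides future detections of the
        # same object.
        overrides: dict = field(default_factory=dict)

        def correct(self, object_id: int, true_label: str) -> None:
            """Record a user's correction for a misdetected object."""
            self.overrides[object_id] = true_label

        def apply(self, detection: Detection) -> Detection:
            """Return the detection, relabeled if the user corrected it."""
            if detection.object_id in self.overrides:
                return Detection(detection.object_id,
                                 self.overrides[detection.object_id],
                                 confidence=1.0)  # user input treated as ground truth
            return detection

    # Usage: the detector mislabels object 7 as a "cup"; the user corrects it.
    corrector = PerceptionCorrector()
    raw = Detection(object_id=7, label="cup", confidence=0.62)
    corrector.correct(object_id=7, true_label="bowl")
    print(corrector.apply(raw))
    # Detection(object_id=7, label='bowl', confidence=1.0)

Keying the override to a persistent object ID is what lets a single correction carry forward, matching the abstract's claim that user corrections can prevent the same errors in the future.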

Authors (4)
  1. Maciej K. Wozniak (11 papers)
  2. Rebecca Stower (5 papers)
  3. Patric Jensfelt (48 papers)
  4. Andre Pereira (25 papers)
Citations (4)