Predicting Human Impressions of Robot Performance During Navigation Tasks
Abstract: Human impressions of robot performance are often measured through surveys. As a more scalable and cost-effective alternative, we investigate whether people's impressions of robot behavior can be predicted from non-verbal behavioral cues using machine learning. To this end, we first contribute the SEAN TOGETHER Dataset, which consists of observations of interactions between a person and a mobile robot in a VR simulation, together with users' impressions of robot performance on a 5-point scale. Second, we contribute analyses of how well humans and supervised learning techniques predict perceived robot performance from different observation types, such as facial expression features and features describing the navigation behavior of the robot and pedestrians. Our results suggest that facial expressions alone provide useful information about human impressions of robot performance, but in the navigation scenarios we considered, reasoning about spatial features in context is critical for the prediction task. Supervised learning techniques also showed promise, outperforming humans' predictions of robot performance in most cases. Further, when predicting robot performance as a binary classification task on unseen users' data, the F1 score of the machine learning models more than doubled compared with predicting performance on the 5-point scale. This suggests that the models generalize well, although they are better at determining the direction of perceived performance than at predicting exact ratings. Building on our findings in simulation, we conducted a real-world demonstration in which a mobile robot uses a machine learning model to predict how a human following it perceives it. Finally, we discuss the implications of our results for deploying such supervised learning models in real-world navigation scenarios.
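As a rough illustration of the evaluation setup described in the abstract, the sketch below trains a classifier on concatenated facial-expression and spatial features and scores it on held-out (unseen) users, comparing F1 for the 5-point and binarized formulations. The feature dimensions, the random-forest model, the rating-binarization threshold, and the synthetic data are all illustrative assumptions rather than the paper's actual pipeline.

```python
# Hypothetical sketch of person-independent evaluation of perceived-performance
# prediction, contrasting 5-class and binarized targets. Feature names, the
# model choice, and the binarization threshold are assumptions for illustration.
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.metrics import f1_score
from sklearn.model_selection import LeaveOneGroupOut

rng = np.random.default_rng(0)

# Synthetic stand-in data: facial-expression features (e.g., action unit
# activations) concatenated with spatial features (e.g., robot-pedestrian
# distances), plus 1-5 performance ratings and the ID of the rating user.
n_samples, n_facial, n_spatial = 600, 17, 8
X = rng.normal(size=(n_samples, n_facial + n_spatial))
ratings = rng.integers(1, 6, size=n_samples)   # ratings on a 5-point scale
users = rng.integers(0, 20, size=n_samples)    # 20 distinct users

def unseen_user_f1(X, y, groups, average):
    """Leave-one-user-out cross-validation: test users never appear in training."""
    scores = []
    for train_idx, test_idx in LeaveOneGroupOut().split(X, y, groups):
        clf = RandomForestClassifier(n_estimators=200, random_state=0)
        clf.fit(X[train_idx], y[train_idx])
        scores.append(f1_score(y[test_idx], clf.predict(X[test_idx]), average=average))
    return float(np.mean(scores))

# Exact 5-point rating prediction vs. a binarized "good / not good" label.
f1_5class = unseen_user_f1(X, ratings, users, average="macro")
f1_binary = unseen_user_f1(X, (ratings >= 4).astype(int), users, average="binary")
print(f"5-point F1: {f1_5class:.2f}  |  binary F1: {f1_binary:.2f}")
```

On real data, the gap between the two scores would correspond to the generalization behavior described above: coarser, direction-of-performance labels are easier to predict for users not seen during training than exact ratings.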