Trust-Aware Assistance Seeking in Human-Supervised Autonomy (2410.20496v1)
Abstract: Our goal is to model and experimentally assess trust evolution in order to predict future beliefs and behaviors of human-robot teams in dynamic environments. Research suggests that maintaining trust among team members is vital for successful performance of a human-robot team, and that trust is a multi-dimensional, latent entity that relates to past experiences and future actions in a complex manner. Employing a human-robot collaborative task, we design an optimal assistance-seeking strategy for the robot using a POMDP framework. In the task, a human supervises an autonomous mobile manipulator collecting objects in an environment and must ensure that the robot executes its work safely. For each object, the robot can either attempt the collection itself or seek human assistance. The human supervisor actively monitors the robot's activities, offering assistance upon request and intervening if they perceive that the robot may fail. In this setting, human trust is the hidden state, and the primary objective is to optimize team performance. We conduct two sets of human-robot interaction experiments. The data from the first experiment are used to estimate POMDP parameters, which are then used to compute an optimal assistance-seeking policy evaluated in the second experiment. The estimated POMDP parameters reveal that, for most participants, human intervention is more probable when trust is low, particularly in high-complexity tasks. Our estimates also suggest that the robot's act of asking for assistance in high-complexity tasks can positively affect human trust. Our experimental results show that the proposed trust-aware policy outperforms an optimal trust-agnostic policy. Finally, by comparing model estimates of human trust, obtained using only behavioral data, with collected self-reported trust values, we show that the model estimates are isomorphic to the self-reported responses.
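To make the formulation concrete, below is a minimal sketch of the belief-update and decision step that such a trust POMDP implies. The state, action, and observation sets mirror the abstract (hidden trust is low/high, the robot attempts or seeks assistance, the supervisor intervenes or not), but all numerical probabilities, the threshold policy, and every name in the code are illustrative assumptions for exposition only, not the parameters estimated in the paper or its optimal policy.

```python
import numpy as np

# Hidden states: trust in {0: low, 1: high}.
# Actions:       {0: attempt autonomously, 1: seek human assistance}.
# Observations:  supervisor behavior in {0: no intervention, 1: intervention}.
# All probabilities below are hypothetical placeholders.

# T[a][s, s']: trust transition probability under action a.
T = {
    0: np.array([[0.9, 0.1],     # attempting while trust is low rarely raises it
                 [0.2, 0.8]]),   # a risky attempt can erode high trust
    1: np.array([[0.6, 0.4],     # asking for help can raise low trust
                 [0.1, 0.9]]),
}

# O[a][s', o]: probability of observing o after action a, given new state s'.
O = {
    0: np.array([[0.3, 0.7],     # low trust -> intervention is likely
                 [0.9, 0.1]]),   # high trust -> supervisor rarely intervenes
    1: np.array([[0.8, 0.2],
                 [0.95, 0.05]]),
}

def belief_update(b, a, o):
    """Standard Bayes filter over the hidden trust state."""
    predicted = b @ T[a]                # propagate belief through the transition model
    posterior = predicted * O[a][:, o]  # weight by the observation likelihood
    return posterior / posterior.sum()

def policy(b, high_complexity, threshold=0.5):
    """Myopic stand-in for the optimal POMDP policy: seek assistance when
    the belief that trust is low exceeds a threshold on a high-complexity
    task. The paper instead solves for an optimal policy."""
    return 1 if high_complexity and b[0] > threshold else 0

# Example step: start uncertain about trust, then observe an intervention.
b = np.array([0.5, 0.5])
a = policy(b, high_complexity=True)
b = belief_update(b, a, o=1)
print(f"action={a}, posterior P(trust low)={b[0]:.2f}")
```

In the paper's pipeline, the transition and observation models in this sketch would be replaced by the parameters estimated from the first experiment, and the threshold rule would be replaced by the optimal policy computed from the resulting POMDP and evaluated in the second experiment.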