Dynamic Human Trust Modeling of Autonomous Agents With Varying Capability and Strategy (2404.19291v1)
Abstract:
Objective: We model the dynamic trust of human subjects in a human-autonomy-teaming screen-based task.
Background: Trust is an emerging area of study in human-robot collaboration. Many studies have treated robot performance as the sole predictor of human trust, but this may underestimate the complexity of the interaction.
Method: Subjects were paired with autonomous agents to search an on-screen grid and determine the number of outlier objects. In each trial, a different autonomous agent with a preassigned capability used one of three search strategies and then reported the number of outliers it found as a fraction of its capability. The subject then reported their total outlier estimate. Finally, subjects evaluated statements about the agent's behavior, its reliability, and their trust in the agent.
Results: Eighty subjects were recruited. Self-reported trust was modeled using Ordinary Least Squares, but for the group that interacted with agents whose capability varied over a short time order, an ARIMAX model performed better. Cross-validating the models between groups yielded a moderate improvement in next-trial trust prediction.
Conclusion: A time series modeling approach reveals the effects of the temporal ordering of agent performance on estimated trust. Recency bias may affect how subjects weigh the contributions of strategy and capability to trust. Understanding the connections between agent behavior, agent performance, and human trust is crucial to improving human-robot collaborative tasks.
Application: The modeling approach in this study demonstrates the need to represent autonomous agent characteristics over time to capture changes in human trust.