Human Trust-based Feedback Control: Dynamically varying automation transparency to optimize human-machine interactions (2006.16353v1)

Published 29 Jun 2020 in cs.HC, cs.RO, cs.SY, and eess.SY

Abstract: Human trust in automation plays an essential role in interactions between humans and automation. While a lack of trust can lead to a human's disuse of automation, over-trust can result in a human trusting a faulty autonomous system which could have negative consequences for the human. Therefore, human trust should be calibrated to optimize human-machine interactions with respect to context-specific performance objectives. In this article, we present a probabilistic framework to model and calibrate a human's trust and workload dynamics during his/her interaction with an intelligent decision-aid system. This calibration is achieved by varying the automation's transparency---the amount and utility of information provided to the human. The parameterization of the model is conducted using behavioral data collected through human-subject experiments, and three feedback control policies are experimentally validated and compared against a non-adaptive decision-aid system. The results show that human-automation team performance can be optimized when the transparency is dynamically updated based on the proposed control policy. This framework is a first step toward widespread design and implementation of real-time adaptive automation for use in human-machine interactions.
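The abstract describes closing a feedback loop on estimated trust by varying automation transparency. As a minimal, purely illustrative sketch (not the paper's actual probabilistic model — the gains, target, and transparency levels below are invented for illustration), one can model trust as a scalar estimate nudged by observed compliance and pick the transparency level from a simple band controller:

```python
# Hypothetical illustration of trust-calibrated transparency control.
# All parameters (gains, target, band) are invented assumptions, not
# values from the paper.

TRANSPARENCY_LEVELS = ("low", "medium", "high")

def update_trust_estimate(trust: float, complied: bool, transparency: str) -> float:
    """Nudge the trust estimate toward 1 when the human complies with the
    decision aid and toward 0 otherwise; higher transparency is assumed
    (for this sketch) to make trust respond faster."""
    gain = {"low": 0.05, "medium": 0.10, "high": 0.20}[transparency]
    target = 1.0 if complied else 0.0
    return trust + gain * (target - trust)

def transparency_policy(trust: float, target: float = 0.7, band: float = 0.1) -> str:
    """Band feedback rule: raise transparency when estimated trust is below
    the calibration target (to counter disuse), lower it when trust
    overshoots (to counter over-trust)."""
    if trust < target - band:
        return "high"
    if trust > target + band:
        return "low"
    return "medium"
```

The point of the sketch is only the control structure: an estimator driven by behavioral observations feeding a policy that keeps trust near a context-specific setpoint, mirroring the calibration objective stated in the abstract.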

Authors (4)
  1. Kumar Akash (17 papers)
  2. Griffon McMahon (1 paper)
  3. Tahira Reid (4 papers)
  4. Neera Jain (16 papers)
Citations (36)
