Selectively Providing Reliance Calibration Cues With Reliance Prediction (2302.09995v2)

Published 20 Feb 2023 in cs.AI and cs.HC

Abstract: For effective collaboration between humans and intelligent agents that employ machine learning for decision-making, humans must understand what agents can and cannot do to avoid over/under-reliance. A solution to this problem is adjusting human reliance through communication using reliance calibration cues (RCCs) to help humans assess agents' capabilities. Previous studies typically attempted to calibrate reliance by continuously presenting RCCs, and when an agent should provide RCCs remains an open question. To answer this, we propose Pred-RC, a method for selectively providing RCCs. Pred-RC uses a cognitive reliance model to predict whether a human will assign a task to an agent. By comparing the prediction results for both cases with and without an RCC, Pred-RC evaluates the influence of the RCC on human reliance. We tested Pred-RC in a human-AI collaboration task and found that it can successfully calibrate human reliance with a reduced number of RCCs.
