
Making an agent's trust stable in a series of success and failure tasks through empathy (2306.09447v1)

Published 15 Jun 2023 in cs.HC

Abstract: As AI technology develops, trust in AI agents is becoming increasingly important for AI applications in human society. Possible ways to improve the trust relationship include empathy, a series of successes and failures, and capability (performance). Appropriate trust makes deviations between actual and ideal performance less likely. In this study, we focus on an agent's empathy and its series of successes and failures as means of increasing trust in AI agents, and we experimentally examine the effect of empathy from agent to person on changes in trust over time. The experiment used a two-factor mixed design: empathy (present, absent) and success-failure series (phase 1 to phase 5). An analysis of variance (ANOVA) was conducted on data from 198 participants. The results showed an interaction between the empathy factor and the success-failure series factor: trust in the agent stabilized when empathy was present, which supports our hypothesis. This study shows that designing AI agents to be empathetic is an important factor for trust and helps humans build appropriate trust relationships with AI agents.
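The 2 (empathy: between-subjects) × 5 (phase: within-subjects) mixed design can be illustrated with a small simulation. Everything below is an assumption for illustration only: the trust update rule, the rating scale, the alternation of success and failure tasks, and the "empathy damps trust swings" mechanism are hypothetical stand-ins, not the authors' model or data.

```python
import random
from statistics import mean

random.seed(0)

# Hypothetical simulation of the 2 x 5 mixed design described in the
# abstract: empathy (present/absent) is between-subjects, phase (1-5)
# is within-subjects. The trust dynamics here are illustrative
# assumptions, not the paper's actual model or results.

PHASES = 5
N_PER_GROUP = 99  # 198 participants split evenly across two groups


def trust_trajectory(empathy: bool) -> list:
    """One simulated participant's trust rating (1-7 scale) per phase."""
    trust = 4.0  # neutral starting trust
    ratings = []
    for phase in range(PHASES):
        success = phase % 2 == 0          # tasks alternate success/failure
        delta = 0.8 if success else -0.8  # trust rises on success, falls on failure
        if empathy:
            delta *= 0.4                  # assumed: empathy damps the swing
        trust = min(7.0, max(1.0, trust + delta + random.gauss(0, 0.3)))
        ratings.append(trust)
    return ratings


def cell_means(empathy: bool) -> list:
    """Mean trust per phase across one group's simulated participants."""
    data = [trust_trajectory(empathy) for _ in range(N_PER_GROUP)]
    return [mean(r[p] for r in data) for p in range(PHASES)]


def spread(means: list) -> float:
    """Range of phase means: a smaller spread means more stable trust."""
    return max(means) - min(means)


with_emp = cell_means(True)
without_emp = cell_means(False)

print(f"empathy:    spread across phases = {spread(with_emp):.2f}")
print(f"no empathy: spread across phases = {spread(without_emp):.2f}")
```

Under these assumed dynamics, the empathy group shows a smaller phase-to-phase spread in mean trust, mirroring the kind of interaction (trust stabilizing when empathy is present) that the abstract reports.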

Authors (2)
  1. Takahiro Tsumura (8 papers)
  2. Seiji Yamada (26 papers)