Changing human's impression of empathy from agent by verbalizing agent's position (2403.14557v1)

Published 21 Mar 2024 in cs.HC

Abstract: As anthropomorphic agents (AI and robots) become more common in society, empathy and trust between people and agents grow increasingly important. Helping people understand agents better can mitigate problems that may arise as agents enter wider social use. Prior work has emphasized the importance of self-disclosure and of the human-agent relationship in interactions. In this study, we focus on the attributes of an agent's self-disclosure and on the type of relationship between agent and person. We conducted an experiment to test hypotheses about trust in and empathy toward agents, manipulating six attributes of self-disclosure (opinions and attitudes, hobbies, work, money, personality, and body) and competitive versus cooperative relationships established before a robotic agent performed a joint task. The experiment used two between-participants factors: six levels of self-disclosure attribute and two levels of relationship with the agent. The results showed that neither factor affected trust in the agent, but the self-disclosure attribute had a statistically significant effect on a person's empathy toward the agent. In addition, the agent's perceived ability to empathize with the person differed significantly only when a relationship type, competitive or cooperative, was presented. These findings could inform effective methods for building relationships with agents as their use in society expands.
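The design described above, a 6 × 2 between-participants experiment tested for main effects and an interaction, maps naturally onto a two-way ANOVA. The sketch below is not the authors' analysis code; it only illustrates how such a design could be tested in Python with statsmodels, using assumed column names and synthetic empathy ratings in place of the paper's actual data.

```python
# Minimal sketch of a two-way between-participants ANOVA for a 6 x 2 design.
# Factor levels follow the abstract; the data and cell sizes are synthetic.
import numpy as np
import pandas as pd
import statsmodels.api as sm
import statsmodels.formula.api as smf

rng = np.random.default_rng(0)
attributes = ["opinions", "hobbies", "work", "money", "personality", "body"]
relationships = ["competitive", "cooperative"]

# One empathy rating per participant, 20 participants per cell (assumed).
rows = [
    {"attribute": a, "relationship": r, "empathy": rng.normal(4.0, 1.0)}
    for a in attributes
    for r in relationships
    for _ in range(20)
]
df = pd.DataFrame(rows)

# Main effects of self-disclosure attribute and relationship type,
# plus their interaction, via an OLS model and a Type II ANOVA table.
model = smf.ols("empathy ~ C(attribute) * C(relationship)", data=df).fit()
print(sm.stats.anova_lm(model, typ=2))
```

With the paper's actual ratings, a significant C(attribute) row and a non-significant C(relationship) row would correspond to the reported pattern for empathy toward the agent.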

Authors (2)
  1. Takahiro Tsumura (8 papers)
  2. Seiji Yamada (26 papers)