Exploring ChatGPT's Empathic Abilities (2308.03527v3)
Abstract: Empathy is often understood as the ability to share and understand another individual's state of mind or emotion. With the increasing use of chatbots in various domains — e.g., children seeking help with homework, individuals looking for medical advice, and people using chatbots as a daily source of companionship — the importance of empathy in human-computer interaction has become more apparent. Our study therefore investigates the extent to which ChatGPT (based on GPT-3.5) can exhibit empathetic responses and emotional expressions. We analyzed the following three aspects: (1) understanding and expressing emotions, (2) parallel emotional response, and (3) empathic personality. Thus, we not only evaluate ChatGPT on various aspects of empathy and compare it with human behavior but also demonstrate a possible way to analyze the empathy of chatbots in general. Our results show that in 91.7% of cases, ChatGPT was able to correctly identify emotions and produce appropriate answers. In conversations, ChatGPT reacted with a parallel emotion in 70.7% of cases. The empathic capabilities of ChatGPT were evaluated using a set of five questionnaires covering different aspects of empathy. Although ChatGPT's scores are still below the average of healthy humans, it scores better than people who have been diagnosed with Asperger syndrome / high-functioning autism.
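The two conversation-level figures in the abstract (91.7% emotion identification, 70.7% parallel emotion) are match rates over labeled trials. A minimal illustrative sketch of that scoring — not the authors' code; the pair-based data format and function name are assumptions — might look like this:

```python
# Illustrative sketch: computing a match rate over (gold, predicted)
# emotion-label pairs, as in the abstract's identification-accuracy
# and parallel-emotion metrics. Data format is assumed, not from the paper.

def emotion_match_rate(pairs):
    """Fraction of trials where the model's emotion matches the reference label."""
    if not pairs:
        return 0.0
    hits = sum(1 for gold, pred in pairs if gold == pred)
    return hits / len(pairs)

# Toy example with four labeled trials (labels are hypothetical):
trials = [("joy", "joy"), ("anger", "anger"), ("sadness", "fear"), ("joy", "joy")]
print(f"match rate: {emotion_match_rate(trials):.1%}")  # 75.0%
```

The same function covers both metrics; only the interpretation of the second element of each pair changes (identified emotion vs. the emotion ChatGPT expressed in its reply).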
Authors: Kristina Schaaff, Caroline Reinig, Tim Schlippe