Investigating Large Language Models' Perception of Emotion Using Appraisal Theory (2310.04450v1)
Abstract: Large language models (LLMs) such as ChatGPT have advanced significantly in recent years and are now used by the general public. As more people interact with these systems, it is crucial to improve our understanding of these black-box models, especially regarding how they handle human psychological aspects. In this work, we investigate their perception of emotion through the lens of appraisal and coping theory using the Stress and Coping Process Questionnaire (SCPQ). SCPQ is a validated clinical instrument consisting of multiple stories that evolve over time and differ in key appraisal variables such as controllability and changeability. We applied SCPQ to three recent LLMs from OpenAI, davinci-003, ChatGPT, and GPT-4, and compared the results with predictions from appraisal theory and with human data. The results show that the LLMs' responses resemble humans' in the dynamics of appraisal and coping, but their responses did not differ along key appraisal dimensions as predicted by the theory and the data. The magnitude of their responses also differs markedly from human data on several variables. We also found that the GPT models can be quite sensitive to instructions and to how questions are asked. This work adds to the growing literature evaluating the psychological aspects of LLMs and helps enrich our understanding of current models.
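Since the paper itself does not include code, the following is a minimal sketch, assuming the OpenAI Python client (openai >= 1.0), of how an SCPQ-style appraisal item might be posed to a chat model and a rating collected. The scenario text, scale wording, helper name `ask_item`, and default model are illustrative placeholders, not the authors' actual prompts or the SCPQ materials.

```python
# Sketch only: illustrative prompts, not the SCPQ instrument or the authors' setup.
from openai import OpenAI

client = OpenAI()  # assumes OPENAI_API_KEY is set in the environment

SCENARIO = (
    "Imagine a stressful situation at work that may or may not be under "
    "your control. The situation has just begun to unfold."
)
QUESTION = (
    "On a scale from 0 (not at all) to 5 (very much), how changeable do "
    "you think this situation is? Answer with a single number."
)

def ask_item(scenario: str, question: str, model: str = "gpt-4") -> str:
    """Send one appraisal item for one scenario phase and return the raw reply."""
    response = client.chat.completions.create(
        model=model,
        temperature=0,  # the paper notes sensitivity to prompting; fixing randomness aids comparability
        messages=[
            {"role": "system", "content": "Answer as if you were the person in the scenario."},
            {"role": "user", "content": f"{scenario}\n\n{question}"},
        ],
    )
    return response.choices[0].message.content.strip()

if __name__ == "__main__":
    print(ask_item(SCENARIO, QUESTION))
```

In such a setup, each story phase would be presented in sequence and the numeric ratings compared against the human norms and the predictions of appraisal theory, as described in the abstract.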
- Nutchanon Yongsatianchot
- Parisa Ghanad Torshizi
- Stacy Marsella