
EmoBench: Evaluating the Emotional Intelligence of Large Language Models (2402.12071v3)

Published 19 Feb 2024 in cs.CL and cs.AI

Abstract: Recent advances in LLMs have highlighted the need for robust, comprehensive, and challenging benchmarks. Yet, research on evaluating their Emotional Intelligence (EI) is considerably limited. Existing benchmarks have two major shortcomings: first, they mainly focus on emotion recognition, neglecting essential EI capabilities such as emotion regulation and thought facilitation through emotion understanding; second, they are primarily constructed from existing datasets, which include frequent patterns, explicit information, and annotation errors, leading to unreliable evaluation. We propose EmoBench, a benchmark that draws upon established psychological theories and proposes a comprehensive definition for machine EI, including Emotional Understanding and Emotional Application. EmoBench includes a set of 400 hand-crafted questions in English and Chinese, which are meticulously designed to require thorough reasoning and understanding. Our findings reveal a considerable gap between the EI of existing LLMs and the average human, highlighting a promising direction for future research. Our code and data are publicly available at https://github.com/Sahandfer/EmoBench.

Authors (10)
  1. Sahand Sabour (13 papers)
  2. Siyang Liu (25 papers)
  3. Zheyuan Zhang (61 papers)
  4. June M. Liu (5 papers)
  5. Jinfeng Zhou (15 papers)
  6. Alvionna S. Sunaryo (1 paper)
  7. Juanzi Li (144 papers)
  8. Tatia M. C. Lee (4 papers)
  9. Rada Mihalcea (131 papers)
  10. Minlie Huang (225 papers)
Citations (6)