ESC-Eval: Evaluating Emotion Support Conversations in Large Language Models (2406.14952v3)
Abstract: Emotion Support Conversation (ESC) is a crucial application that aims to reduce human stress, offer emotional guidance, and ultimately enhance mental and physical well-being. With the advancement of LLMs, many researchers have employed LLMs as ESC models. However, how to evaluate these LLM-based ESC models remains unclear. Inspired by the rapid development of role-playing agents, we propose an ESC Evaluation framework (ESC-Eval), which uses a role-playing agent to interact with ESC models, followed by a manual evaluation of the resulting dialogues. Specifically, we first re-organize 2,801 role-playing cards from seven existing datasets to define the roles of the role-playing agent. Second, we train a dedicated role-playing model, ESC-Role, which behaves more like a confused help-seeker than GPT-4 does. Third, using ESC-Role and the organized role cards, we systematically conduct experiments with 14 LLMs as ESC models, including general AI-assistant LLMs (e.g., ChatGPT) and ESC-oriented LLMs (e.g., ExTES-Llama). We perform comprehensive human annotation on the interactive multi-turn dialogues of the different ESC models. The results show that ESC-oriented LLMs exhibit superior ESC abilities compared to general AI-assistant LLMs, but they still fall short of human performance. Moreover, to automate scoring for future ESC models, we develop ESC-RANK, trained on the annotated data, which surpasses the scoring performance of GPT-4 by 35 points. Our data and code are available at https://github.com/AIFlames/Esc-Eval.
- Haiquan Zhao
- Lingyu Li
- Shisong Chen
- Shuqi Kong
- Jiaan Wang
- Tianle Gu
- Yixu Wang
- Dandan Liang
- Zhixu Li
- Yanghua Xiao
- Yingchun Wang
- Kexin Huang
- Yan Teng
- Wang Jian
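
To make the evaluation pipeline concrete, below is a minimal sketch of the interaction loop the abstract describes: a role-playing agent (standing in for ESC-Role, conditioned on a role card) converses with the ESC model under test, and the resulting multi-turn dialogue is collected for human annotation (or, later, automatic scoring by a model like ESC-RANK). All names and signatures here (`RoleCard`, `seeker_reply`, `supporter_reply`, `run_dialogue`) are hypothetical stand-ins for illustration, not the actual API of the ESC-Eval repository.

```python
# Hypothetical sketch of the ESC-Eval interaction loop (not the official API).

from dataclasses import dataclass


@dataclass
class RoleCard:
    """One of the re-organized role cards: a persona plus the problem it faces."""
    persona: str
    problem: str


def seeker_reply(card: RoleCard, history: list[str]) -> str:
    # Placeholder for ESC-Role: a role-playing model conditioned on the card.
    return f"(as {card.persona}) I'm still struggling with {card.problem}..."


def supporter_reply(history: list[str]) -> str:
    # Placeholder for the ESC model under evaluation (e.g., an LLM endpoint).
    return "That sounds hard. Can you tell me more about how you feel?"


def run_dialogue(card: RoleCard, max_turns: int = 5) -> list[str]:
    """Alternate seeker/supporter turns to build a multi-turn dialogue,
    which would then be scored by human annotators (or an automatic scorer)."""
    history: list[str] = []
    for _ in range(max_turns):
        history.append("Seeker: " + seeker_reply(card, history))
        history.append("Supporter: " + supporter_reply(history))
    return history


if __name__ == "__main__":
    card = RoleCard(persona="a student", problem="exam anxiety")
    for turn in run_dialogue(card, max_turns=2):
        print(turn)
```

In the framework itself, the placeholder functions would be replaced by calls to ESC-Role and to each of the 14 evaluated LLMs, with the collected dialogues passed on to human annotators.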