Harnessing the Power of Large Language Models for Empathetic Response Generation: Empirical Investigations and Improvements (2310.05140v4)

Published 8 Oct 2023 in cs.CL and cs.AI

Abstract: Empathetic dialogue is an indispensable part of building harmonious social relationships and contributes to the development of helpful AI. Previous approaches are mainly based on fine-tuning small-scale language models. With the advent of ChatGPT, the application of LLMs in this field has attracted great attention. This work empirically investigates the performance of LLMs in generating empathetic responses and proposes three improvement methods: semantically similar in-context learning, two-stage interactive generation, and combination with a knowledge base. Extensive experiments show that LLMs can significantly benefit from the proposed methods and are able to achieve state-of-the-art performance in both automatic and human evaluations. Additionally, the work explores the possibility of GPT-4 simulating human evaluators.
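The first of the three methods, semantically similar in-context learning, amounts to retrieving the training examples most similar to the current dialogue context and placing them in the prompt as demonstrations. A minimal sketch of that retrieval-and-prompting loop is below; the bag-of-words cosine similarity is a stand-in for the sentence-embedding model a real system would use, and all names (`build_prompt`, the exemplar format) are illustrative assumptions, not the paper's actual implementation.

```python
from collections import Counter
import math


def cosine_sim(a: str, b: str) -> float:
    """Bag-of-words cosine similarity; a stand-in for embedding-based similarity."""
    va, vb = Counter(a.lower().split()), Counter(b.lower().split())
    dot = sum(va[w] * vb[w] for w in va)
    na = math.sqrt(sum(c * c for c in va.values()))
    nb = math.sqrt(sum(c * c for c in vb.values()))
    return dot / (na * nb) if na and nb else 0.0


def build_prompt(query: str, exemplars: list[tuple[str, str]], k: int = 2) -> str:
    """Rank (context, response) exemplars by similarity to the query and
    prepend the top k as in-context demonstrations before the new context."""
    ranked = sorted(exemplars, key=lambda ex: cosine_sim(query, ex[0]), reverse=True)
    demos = "\n\n".join(
        f"Context: {c}\nEmpathetic response: {r}" for c, r in ranked[:k]
    )
    return f"{demos}\n\nContext: {query}\nEmpathetic response:"
```

The resulting prompt would then be sent to the LLM, which continues from the final "Empathetic response:" line; the demonstrations steer it toward the style and emotional register of the retrieved examples.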

Authors (3)
  1. Yushan Qian (4 papers)
  2. Wei-Nan Zhang (19 papers)
  3. Ting Liu (329 papers)
Citations (25)