SoulChat: Improving LLMs' Empathy, Listening, and Comfort Abilities through Fine-tuning with Multi-turn Empathy Conversations (2311.00273v1)

Published 1 Nov 2023 in cs.CL

Abstract: LLMs have been widely applied in various fields due to their strong capabilities for knowledge memorization and chain-of-thought (CoT) reasoning. When applied to psychological counseling, however, they often rush to provide universal advice. Yet when users seek psychological support, they need empathy, trust, understanding, and comfort, rather than just reasonable advice. To this end, we constructed a multi-turn empathetic conversation dataset of more than 2 million samples, in which the input is the multi-turn conversation context and the target is an empathetic response covering expressions such as questioning, comfort, recognition, listening, trust, and emotional support. Experiments show that the empathy of LLMs can be significantly enhanced when they are fine-tuned with multi-turn dialogue history and responses that are closer to the expression of a psychological counselor.
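
The abstract describes training pairs in which the multi-turn conversation context is the input and an empathetic counselor-style reply is the target. Below is a minimal sketch of how such a sample might be assembled; the field names, chat template, and helper function are illustrative assumptions, not the paper's actual SoulChatCorpus schema or training code.

```python
# Sketch only: structure an (input, target) fine-tuning sample from a
# multi-turn counseling dialogue. Field names and formatting are assumed,
# not taken from the paper.

def build_finetuning_sample(turns, empathetic_reply):
    """Flatten a multi-turn dialogue into an (input, target) pair.

    turns: list of (speaker, utterance) tuples, e.g. [("user", "..."), ...]
    empathetic_reply: the gold empathetic response used as the training target.
    """
    context = "\n".join(f"{speaker}: {utterance}" for speaker, utterance in turns)
    return {
        "input": context + "\nassistant:",   # multi-turn conversation context
        "target": empathetic_reply,          # empathetic response (listening, comfort, etc.)
    }


if __name__ == "__main__":
    dialogue = [
        ("user", "I've been feeling really anxious about work lately."),
        ("assistant", "That sounds exhausting. What part of work weighs on you most?"),
        ("user", "I'm afraid I'll be blamed if the project slips."),
    ]
    sample = build_finetuning_sample(
        dialogue,
        "It makes sense to feel that pressure. You're carrying a lot; "
        "can you tell me more about what a slip would mean for you?",
    )
    print(sample["input"])
    print(sample["target"])
```

Each such pair would then feed a standard supervised fine-tuning loop, with the loss computed only on the target response tokens.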

Authors (7)
  1. Yirong Chen (11 papers)
  2. Xiaofen Xing (29 papers)
  3. Jingkai Lin (2 papers)
  4. Huimin Zheng (6 papers)
  5. Zhenyu Wang (150 papers)
  6. Qi Liu (485 papers)
  7. Xiangmin Xu (54 papers)
Citations (28)