
BERT-ERC: Fine-tuning BERT is Enough for Emotion Recognition in Conversation (2301.06745v1)

Published 17 Jan 2023 in cs.CL and cs.AI

Abstract: Previous works on emotion recognition in conversation (ERC) follow a two-step paradigm, which can be summarized as first producing context-independent features via fine-tuning pretrained language models (PLMs) and then analyzing contextual information and dialogue structure information among the extracted features. However, we discover that this paradigm has several limitations. Accordingly, we propose a novel paradigm, i.e., exploring contextual information and dialogue structure information in the fine-tuning step, and adapting the PLM to the ERC task in terms of input text, classification structure, and training strategy. Furthermore, we develop our model BERT-ERC according to the proposed paradigm, which improves ERC performance in three aspects, namely suggestive text, fine-grained classification module, and two-stage training. Compared to existing methods, BERT-ERC achieves substantial improvement on four datasets, indicating its effectiveness and generalization capability. Besides, we also set up the limited resources scenario and the online prediction scenario to approximate real-world scenarios. Extensive experiments demonstrate that the proposed paradigm significantly outperforms the previous one and can be adapted to various scenes.
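
The core idea, folding dialogue context into the fine-tuning step rather than extracting utterance features in isolation, can be illustrated with a minimal sketch. The sketch below assumes the Hugging Face transformers API and approximates the input construction only loosely: the paper's actual "suggestive text" template, fine-grained classification module, and two-stage training procedure are described in the paper itself and are not reproduced here.

```python
# Minimal sketch of context-aware fine-tuning for ERC (a hypothetical
# approximation of the paradigm; the paper's exact input template and
# classifier head differ).
import torch
from transformers import AutoTokenizer, AutoModelForSequenceClassification

# Example label set; the actual labels depend on the ERC dataset used.
EMOTIONS = ["neutral", "joy", "sadness", "anger", "surprise", "fear", "disgust"]

tokenizer = AutoTokenizer.from_pretrained("bert-base-uncased")
model = AutoModelForSequenceClassification.from_pretrained(
    "bert-base-uncased", num_labels=len(EMOTIONS)
)

def build_input(context_utterances, context_speakers, target_utterance, target_speaker):
    """Pack preceding dialogue turns and the target utterance into a single
    sequence pair, so BERT sees contextual and speaker information during
    fine-tuning (hypothetical template, not the paper's exact format)."""
    context = " ".join(
        f"{s}: {u}" for s, u in zip(context_speakers, context_utterances)
    )
    target = f"{target_speaker}: {target_utterance}"
    return tokenizer(context, target, truncation=True, max_length=512,
                     return_tensors="pt")

# One fine-tuning step on a toy example.
inputs = build_input(
    ["How did the interview go?"], ["A"],
    "It went great, I got the offer!", "B",
)
labels = torch.tensor([EMOTIONS.index("joy")])
outputs = model(**inputs, labels=labels)
outputs.loss.backward()  # an optimizer.step() would follow in a real loop
```

Packing the context and the target utterance as a sentence pair lets the self-attention layers relate the target turn to its dialogue history during fine-tuning, which is the contrast the abstract draws against the previous two-step paradigm.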

Authors (8)
  1. Xiangyu Qin (1 paper)
  2. Zhiyu Wu (26 papers)
  3. Jinshi Cui (7 papers)
  4. Tingting Zhang (53 papers)
  5. Yanran Li (32 papers)
  6. Jian Luan (52 papers)
  7. Bin Wang (751 papers)
  8. Li Wang (470 papers)
Citations (23)
