BERT-ERC: Fine-tuning BERT is Enough for Emotion Recognition in Conversation (2301.06745v1)
Abstract: Previous work on emotion recognition in conversation (ERC) follows a two-step paradigm: first produce context-independent features by fine-tuning pretrained language models (PLMs), then analyze contextual information and dialogue structure among the extracted features. However, we find that this paradigm has several limitations. Accordingly, we propose a novel paradigm that exploits contextual information and dialogue structure information in the fine-tuning step, adapting the PLM to the ERC task in terms of input text, classification structure, and training strategy. Following this paradigm, we develop our model BERT-ERC, which improves ERC performance through three components: suggestive text, a fine-grained classification module, and two-stage training. Compared to existing methods, BERT-ERC achieves substantial improvements on four datasets, indicating its effectiveness and generalization capability. We also set up a limited-resources scenario and an online-prediction scenario to approximate real-world conditions. Extensive experiments demonstrate that the proposed paradigm significantly outperforms the previous one and adapts well to various scenarios.
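The core idea of the new paradigm, injecting conversational context into the input text before fine-tuning, can be sketched in a few lines. Below is a minimal illustration assuming a HuggingFace-style setup; the prompt layout, context window, and label set are hypothetical placeholders, not the paper's exact "suggestive text" format or classification module.

```python
# Minimal sketch (not the authors' code) of fine-tuning-time context injection:
# concatenate preceding utterances with the target utterance and classify
# the result with a standard BERT sequence classifier.
import torch
from transformers import BertTokenizerFast, BertForSequenceClassification

# Assumed MELD-style label set; the paper's datasets may differ.
EMOTIONS = ["neutral", "joy", "sadness", "anger", "surprise", "fear", "disgust"]

tokenizer = BertTokenizerFast.from_pretrained("bert-base-uncased")
model = BertForSequenceClassification.from_pretrained(
    "bert-base-uncased", num_labels=len(EMOTIONS)
)

def build_suggestive_text(dialogue, target_idx, window=4):
    """Concatenate up to `window` preceding speaker-tagged utterances with the
    target utterance so context is visible to BERT during fine-tuning.
    The "[SEP]"-joined context/target layout is an assumption."""
    start = max(0, target_idx - window)
    context = " ".join(f"{spk}: {utt}" for spk, utt in dialogue[start:target_idx])
    spk, utt = dialogue[target_idx]
    return f"{context} [SEP] {spk}: {utt}"

# Toy usage: predict the emotion of the second utterance given the first.
dialogue = [("A", "I got the job!"), ("B", "That's wonderful news.")]
text = build_suggestive_text(dialogue, target_idx=1)
inputs = tokenizer(text, return_tensors="pt", truncation=True, max_length=512)
with torch.no_grad():
    logits = model(**inputs).logits
print(EMOTIONS[logits.argmax(dim=-1).item()])
```

In this sketch the fine-tuning loss would be applied to the classifier logits directly, so contextual information shapes the PLM's representations rather than being modeled in a separate second stage.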
- Xiangyu Qin
- Zhiyu Wu
- Jinshi Cui
- Tingting Zhang
- Yanran Li
- Jian Luan
- Bin Wang
- Li Wang