
MM-DFN: Multimodal Dynamic Fusion Network for Emotion Recognition in Conversations (2203.02385v1)

Published 4 Mar 2022 in cs.CL, cs.AI, and cs.MM

Abstract: Emotion Recognition in Conversations (ERC) has considerable prospects for developing empathetic machines. For multimodal ERC, it is vital to understand context and fuse modality information in conversations. Recent graph-based fusion methods generally aggregate multimodal information by exploring unimodal and cross-modal interactions in a graph. However, they accumulate redundant information at each layer, limiting the context understanding between modalities. In this paper, we propose a novel Multimodal Dynamic Fusion Network (MM-DFN) to recognize emotions by fully understanding multimodal conversational context. Specifically, we design a new graph-based dynamic fusion module to fuse multimodal contextual features in a conversation. The module reduces redundancy and enhances complementarity between modalities by capturing the dynamics of contextual information in different semantic spaces. Extensive experiments on two public benchmark datasets demonstrate the effectiveness and superiority of MM-DFN.
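
To make the idea of graph-based fusion concrete, here is a minimal sketch of one fusion layer over utterance nodes drawn from the acoustic, visual, and textual modalities: neighbors are aggregated through a shared adjacency matrix (covering unimodal and cross-modal edges), and a learned gate blends the aggregated signal with the input to limit redundant accumulation across layers. The layer structure, gating scheme, and all names here are illustrative assumptions, not the authors' exact MM-DFN formulation, which the abstract does not specify.

```python
import torch
import torch.nn as nn

class GraphFusionLayer(nn.Module):
    """Illustrative graph-based fusion layer (hypothetical, not MM-DFN itself):
    aggregate over uni-/cross-modal edges, then gate against the input so each
    layer adds complementary rather than redundant information."""
    def __init__(self, dim):
        super().__init__()
        self.proj = nn.Linear(dim, dim)
        self.gate = nn.Linear(2 * dim, dim)

    def forward(self, x, adj):
        # x:   (num_nodes, dim) utterance-node features from all modalities
        # adj: (num_nodes, num_nodes) row-normalized adjacency with
        #      unimodal and cross-modal edges
        agg = self.proj(adj @ x)                                   # graph aggregation
        g = torch.sigmoid(self.gate(torch.cat([x, agg], dim=-1)))  # redundancy gate
        return g * agg + (1 - g) * x                               # gated residual fusion

# Toy usage: 6 utterance nodes (2 per modality), 128-dim features.
x = torch.randn(6, 128)
adj = torch.softmax(torch.randn(6, 6), dim=-1)
fused = GraphFusionLayer(128)(x, adj)
print(fused.shape)  # torch.Size([6, 128])
```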

Authors (5)
  1. Dou Hu (16 papers)
  2. Xiaolong Hou (2 papers)
  3. Lingwei Wei (19 papers)
  4. Lianxin Jiang (7 papers)
  5. Yang Mo (11 papers)
Citations (100)