
GraphMFT: A Graph Network based Multimodal Fusion Technique for Emotion Recognition in Conversation (2208.00339v5)

Published 31 Jul 2022 in cs.MM

Abstract: Multimodal machine learning is an emerging research area that has received a great deal of scholarly attention in recent years. To date, however, there have been few studies on multimodal Emotion Recognition in Conversation (ERC). Because Graph Neural Networks (GNNs) possess a powerful capacity for relational modeling, they have an inherent advantage in multimodal learning: GNNs leverage a graph constructed from multimodal data to perform intra- and inter-modal information interaction, which effectively facilitates the integration and complementation of multimodal data. In this work, we propose a novel Graph network based Multimodal Fusion Technique (GraphMFT) for emotion recognition in conversation. Multimodal data can be modeled as a graph in which each data object is regarded as a node and both intra- and inter-modal dependencies between data objects are regarded as edges. GraphMFT utilizes multiple improved graph attention networks to capture intra-modal contextual information and inter-modal complementary information. In addition, GraphMFT attempts to address the challenges of existing graph-based multimodal conversational emotion recognition models such as MMGCN. Empirical results on two public multimodal datasets show that our model outperforms state-of-the-art (SOTA) approaches, with accuracies of 67.90% and 61.30%.
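The abstract describes utterances from each modality as nodes in a single graph, with intra-modal context edges and inter-modal counterpart edges, processed by stacked graph attention layers. The sketch below illustrates that construction. It is a minimal sketch, not the authors' implementation: PyTorch Geometric's standard GATConv stands in for the paper's improved graph attention networks, and the context window, feature dimension, and classifier head are illustrative assumptions.

```python
# Minimal sketch of the graph construction described in the abstract.
# Assumptions (not from the paper): GATConv stands in for the "improved
# graph attention networks", window=2 for intra-modal context, and a
# linear head for per-utterance emotion classification.
import torch
from torch_geometric.nn import GATConv

def build_edges(num_utts: int, num_modalities: int, window: int = 2) -> torch.Tensor:
    """Intra-modal edges link nearby utterances in the same modality;
    inter-modal edges link the same utterance across modalities."""
    edges = []
    for m in range(num_modalities):
        base = m * num_utts
        for i in range(num_utts):
            for j in range(max(0, i - window), min(num_utts, i + window + 1)):
                edges.append((base + i, base + j))               # intra-modal context
            for m2 in range(num_modalities):
                if m2 != m:
                    edges.append((base + i, m2 * num_utts + i))  # inter-modal link
    return torch.tensor(edges, dtype=torch.long).t()             # shape [2, num_edges]

class GraphFusion(torch.nn.Module):
    """Stacked graph attention layers over the multimodal graph,
    followed by a per-node emotion classifier."""
    def __init__(self, dim: int = 128, num_classes: int = 6, num_layers: int = 2):
        super().__init__()
        self.gats = torch.nn.ModuleList(
            GATConv(dim, dim, heads=4, concat=False) for _ in range(num_layers)
        )
        self.classifier = torch.nn.Linear(dim, num_classes)

    def forward(self, x: torch.Tensor, edge_index: torch.Tensor) -> torch.Tensor:
        for gat in self.gats:
            x = torch.relu(gat(x, edge_index))  # intra-/inter-modal attention
        return self.classifier(x)               # per-node emotion logits

# Usage: 10 utterances, 3 modalities (e.g. text/audio/visual), 128-d features.
x = torch.randn(3 * 10, 128)
edge_index = build_edges(num_utts=10, num_modalities=3)
logits = GraphFusion()(x, edge_index)           # shape [30, 6]
```

Because all modalities share one node set, each attention layer can mix contextual and complementary information in a single pass; the paper's actual model refines this with multiple improved graph attention networks rather than the stock GATConv used here.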

Authors (4)
  1. Jiang Li (48 papers)
  2. Xiaoping Wang (56 papers)
  3. Guoqing Lv (4 papers)
  4. Zhigang Zeng (28 papers)
Citations (25)
