
MInD: Improving Multimodal Sentiment Analysis via Multimodal Information Disentanglement (2401.11818v2)

Published 22 Jan 2024 in cs.MM

Abstract: Learning effective joint representations has been a central task in multi-modal sentiment analysis. Previous works addressing this task focus on exploring sophisticated fusion techniques to enhance performance. However, the inherent heterogeneity of distinct modalities remains a core problem that brings challenges in fusing and coordinating multi-modal signals at both the representational and informational levels, impeding the full exploitation of multi-modal information. To address this problem, we propose the Multi-modal Information Disentanglement (MInD) method, which decomposes the multi-modal inputs into modality-invariant and modality-specific components through a shared encoder and multiple private encoders. Furthermore, by explicitly training generated noise in an adversarial manner, MInD is able to isolate uninformative content, thus improving the learned representations. The proposed disentangled decomposition therefore allows for a fusion process that is simpler than alternative methods and results in improved performance. Experimental evaluations conducted on representative benchmark datasets demonstrate MInD's effectiveness in both multi-modal emotion recognition and multi-modal humor detection tasks. Code will be released upon acceptance of the paper.
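The shared-plus-private encoder decomposition described in the abstract can be sketched as follows. This is a minimal toy illustration, not the authors' implementation: the per-modality projections, dimensions, random linear "encoders", and concatenation-based fusion are all illustrative assumptions; the paper's adversarial noise training is omitted.

```python
import numpy as np

rng = np.random.default_rng(0)

def linear(d_in, d_out):
    # Hypothetical toy "encoder": a single fixed random linear map.
    W = rng.standard_normal((d_in, d_out)) / np.sqrt(d_in)
    return lambda x: x @ W

# Illustrative feature dimensions for three modalities (not from the paper).
dims = {"text": 16, "audio": 12, "video": 20}
d_common, d_latent = 10, 8

# Project each modality to a common dimension so one shared encoder
# can produce modality-invariant components for all of them.
proj = {m: linear(d, d_common) for m, d in dims.items()}
shared_enc = linear(d_common, d_latent)                      # modality-invariant
private_enc = {m: linear(d_common, d_latent) for m in dims}  # modality-specific

def disentangle(inputs):
    """Split each modality into invariant and specific components."""
    projected = {m: proj[m](x) for m, x in inputs.items()}
    invariant = {m: shared_enc(h) for m, h in projected.items()}
    specific = {m: private_enc[m](h) for m, h in projected.items()}
    return invariant, specific

def fuse(invariant, specific):
    """Simple fusion over disentangled parts: concatenation."""
    parts = [invariant[m] for m in dims] + [specific[m] for m in dims]
    return np.concatenate(parts, axis=-1)

inputs = {m: rng.standard_normal((4, d)) for m, d in dims.items()}
inv, spec = disentangle(inputs)
fused = fuse(inv, spec)
print(fused.shape)  # batch of 4, six latent components of 8 dims each
```

Because fusion operates on already-aligned invariant/specific components of equal dimension, it can stay as simple as concatenation, which is the simplification the abstract claims over more elaborate fusion schemes.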

Authors (7)
  1. Weichen Dai (14 papers)
  2. Xingyu Li (104 papers)
  3. Pengbo Hu (6 papers)
  4. Zeyu Wang (137 papers)
  5. Ji Qi (61 papers)
  6. Jianlin Peng (2 papers)
  7. Yi Zhou (438 papers)
Citations (1)