EffMulti: Efficiently Modeling Complex Multimodal Interactions for Emotion Analysis (2212.08661v1)

Published 16 Dec 2022 in cs.LG, cs.AI, and cs.CL

Abstract: Humans are skilled at reading the interlocutor's emotion from multimodal signals, including spoken words, simultaneous speech, and facial expressions. It is still a challenge to effectively decode emotions from the complex interactions of multimodal signals. In this paper, we design three kinds of multimodal latent representations to refine the emotion analysis process and capture complex multimodal interactions from different views, including an intact three-modal integrating representation, a modality-shared representation, and three modality-individual representations. Then, a modality-semantic hierarchical fusion is proposed to reasonably incorporate these representations into a comprehensive interaction representation. The experimental results demonstrate that our EffMulti outperforms the state-of-the-art methods. The compelling performance benefits from its well-designed framework with ease of implementation, lower computing complexity, and fewer trainable parameters.
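
The abstract describes three kinds of latent representations (modality-individual, modality-shared, and an intact three-modal integrating representation) combined by a modality-semantic hierarchical fusion. The sketch below is a minimal, hypothetical PyTorch rendering of that structure, not the authors' implementation: the encoder types, feature dimensions, fusion order, and classifier head are all assumptions for illustration.

```python
# Minimal sketch of the ideas in the abstract, NOT the authors' code:
# modality-individual encoders, a modality-shared encoder, an integrated
# three-modal representation, and a simple hierarchical fusion.
# All layer sizes and the fusion order are assumptions.
import torch
import torch.nn as nn


class EffMultiSketch(nn.Module):
    def __init__(self, dim_text=300, dim_audio=74, dim_visual=35,
                 hidden=128, num_emotions=7):
        super().__init__()
        # Modality-individual representations: one encoder per modality.
        self.enc_text = nn.Linear(dim_text, hidden)
        self.enc_audio = nn.Linear(dim_audio, hidden)
        self.enc_visual = nn.Linear(dim_visual, hidden)
        # Modality-shared representation: a single encoder applied to every
        # modality so their outputs live in a common space.
        self.enc_shared = nn.Sequential(nn.Linear(hidden, hidden), nn.ReLU())
        # Intact three-modal integrating representation: jointly encode the
        # concatenation of all three modality feature vectors.
        self.enc_integrated = nn.Linear(dim_text + dim_audio + dim_visual,
                                        hidden)
        # Hierarchical fusion: first merge the three individual
        # representations, then fold in the shared and integrated ones.
        self.fuse_individual = nn.Linear(3 * hidden, hidden)
        self.fuse_all = nn.Linear(3 * hidden, hidden)
        self.classifier = nn.Linear(hidden, num_emotions)

    def forward(self, text, audio, visual):
        h_t = torch.relu(self.enc_text(text))
        h_a = torch.relu(self.enc_audio(audio))
        h_v = torch.relu(self.enc_visual(visual))
        # Shared representation: average of the shared-encoded modalities.
        h_shared = (self.enc_shared(h_t) + self.enc_shared(h_a)
                    + self.enc_shared(h_v)) / 3
        # Integrated representation from the raw concatenation.
        h_int = torch.relu(
            self.enc_integrated(torch.cat([text, audio, visual], dim=-1)))
        # Hierarchical fusion into one comprehensive interaction representation.
        h_ind = torch.relu(
            self.fuse_individual(torch.cat([h_t, h_a, h_v], dim=-1)))
        h = torch.relu(
            self.fuse_all(torch.cat([h_ind, h_shared, h_int], dim=-1)))
        return self.classifier(h)


# Example usage with dummy per-utterance features (batch of 8).
model = EffMultiSketch()
logits = model(torch.randn(8, 300), torch.randn(8, 74), torch.randn(8, 35))
print(logits.shape)  # torch.Size([8, 7])
```

The feature dimensions (300/74/35) and the seven-class output are placeholders; the paper's actual datasets, feature extractors, and label sets would determine these values.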

Authors (4)
  1. Feng Qiu (72 papers)
  2. Chengyang Xie (1 paper)
  3. Yu Ding (70 papers)
  4. Wanzeng Kong (13 papers)
Citations (1)
