
Multilevel Transformer For Multimodal Emotion Recognition (2211.07711v2)

Published 26 Oct 2022 in cs.CL, cs.AI, and cs.LG

Abstract: Multimodal emotion recognition has attracted much attention recently. Fusing multiple modalities effectively with limited labeled data is a challenging task. Given the success of pre-trained models and the fine-grained nature of emotion expression, it is reasonable to take both aspects into consideration. Unlike previous methods that mainly focus on one aspect, we introduce a novel multi-granularity framework that combines fine-grained representations with pre-trained utterance-level representations. Inspired by Transformer TTS, we propose a multilevel transformer model to perform fine-grained multimodal emotion recognition. Specifically, we explore different methods of incorporating phoneme-level embeddings with word-level embeddings. To perform multi-granularity learning, we combine the multilevel transformer model with ALBERT. Extensive experimental results show that both our multilevel transformer model and our multi-granularity model outperform previous state-of-the-art approaches on the IEMOCAP dataset using text transcripts and speech signals.
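
To make the multi-granularity idea concrete, here is a minimal PyTorch sketch of the kind of model the abstract describes: phoneme-level and word-level embeddings fused and fed through a transformer encoder, then combined with a pre-trained utterance-level vector (as would come from ALBERT) for emotion classification. All module names, dimensions, the element-wise-sum fusion, and the four-class output are illustrative assumptions, not the authors' exact architecture.

```python
# Hypothetical sketch of the multi-granularity framework from the abstract.
# The fusion strategy (summing aligned phoneme- and word-level embeddings),
# layer counts, and dimensions are assumptions for illustration only.
import torch
import torch.nn as nn


class MultilevelTransformerSketch(nn.Module):
    def __init__(self, n_phonemes=80, n_words=10000, d_model=256,
                 d_utterance=768, n_classes=4):
        super().__init__()
        self.phoneme_emb = nn.Embedding(n_phonemes, d_model)
        self.word_emb = nn.Embedding(n_words, d_model)
        encoder_layer = nn.TransformerEncoderLayer(
            d_model=d_model, nhead=4, batch_first=True)
        self.encoder = nn.TransformerEncoder(encoder_layer, num_layers=2)
        # The utterance-level vector would come from a pre-trained ALBERT
        # encoder; here it is simply an input tensor of size d_utterance.
        self.classifier = nn.Linear(d_model + d_utterance, n_classes)

    def forward(self, phoneme_ids, word_ids, utterance_vec):
        # One simple way to incorporate phoneme-level embeddings with
        # word-level embeddings: element-wise sum, assuming both
        # sequences are aligned to the same length.
        x = self.phoneme_emb(phoneme_ids) + self.word_emb(word_ids)
        h = self.encoder(x).mean(dim=1)  # pool fine-grained features
        fused = torch.cat([h, utterance_vec], dim=-1)
        return self.classifier(fused)


# Tiny smoke test with random inputs (batch of 2, sequence length 16).
model = MultilevelTransformerSketch()
logits = model(torch.randint(0, 80, (2, 16)),
               torch.randint(0, 10000, (2, 16)),
               torch.randn(2, 768))
print(logits.shape)  # torch.Size([2, 4])
```

The design choice to pool the fine-grained transformer output and concatenate it with the utterance-level vector is just one plausible reading of "multi-granularity learning"; the paper itself explores several fusion methods.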

Authors (5)
  1. Junyi He (6 papers)
  2. Meimei Wu (3 papers)
  3. Meng Li (244 papers)
  4. Xiaobo Zhu (54 papers)
  5. Feng Ye (108 papers)
Citations (6)
