
Speech Emotion Recognition with Co-Attention based Multi-level Acoustic Information (2203.15326v1)

Published 29 Mar 2022 in cs.SD, cs.AI, and eess.AS

Abstract: Speech Emotion Recognition (SER) aims to help machines understand humans' subjective emotions from audio information alone. However, extracting and utilizing comprehensive, in-depth audio information remains a challenging task. In this paper, we propose an end-to-end speech emotion recognition system using multi-level acoustic information with a newly designed co-attention module. We first extract multi-level acoustic information, including MFCC, spectrogram, and embedded high-level acoustic information, with CNN, BiLSTM, and wav2vec2, respectively. These extracted features are then treated as multimodal inputs and fused by the proposed co-attention mechanism. Experiments are carried out on the IEMOCAP dataset, and our model achieves competitive performance under two different speaker-independent cross-validation strategies. Our code is available on GitHub.
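The abstract describes fusing three acoustic feature streams (MFCC, spectrogram, and wav2vec2 embeddings) through a co-attention module. The following is a minimal NumPy sketch of one way such attention-weighted fusion over feature levels could look; the function names, the single-vector scoring, and all shapes are illustrative assumptions, not the paper's actual implementation (which is available in the authors' GitHub repository).

```python
import numpy as np

def softmax(x, axis=-1):
    # Numerically stable softmax over the given axis.
    e = np.exp(x - x.max(axis=axis, keepdims=True))
    return e / e.sum(axis=axis, keepdims=True)

def co_attention_fuse(features, w_proj, w_attn):
    """Fuse multi-level acoustic features with a simple attention scheme.

    Hypothetical sketch, not the paper's exact module.

    features: list of (batch, dim_i) arrays, one per acoustic level
              (e.g. CNN-pooled MFCC, BiLSTM-pooled spectrogram,
              wav2vec2 embeddings).
    w_proj:   list of (dim_i, d) matrices projecting each level to a
              shared d-dimensional space.
    w_attn:   (d, 1) scoring vector yielding one attention logit per level.
    """
    projected = [f @ w for f, w in zip(features, w_proj)]  # each (batch, d)
    stacked = np.stack(projected, axis=1)                  # (batch, L, d)
    scores = stacked @ w_attn                              # (batch, L, 1)
    weights = softmax(scores, axis=1)                      # attend over levels
    fused = (weights * stacked).sum(axis=1)                # (batch, d)
    return fused, weights.squeeze(-1)
```

The attention weights let the model emphasize whichever acoustic level is most informative for a given utterance, rather than concatenating the streams with fixed importance.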

Authors (5)
  1. Heqing Zou (15 papers)
  2. Yuke Si (1 paper)
  3. Chen Chen (753 papers)
  4. Deepu Rajan (14 papers)
  5. Eng Siong Chng (112 papers)
Citations (97)
