Dawn of the transformer era in speech emotion recognition: closing the valence gap (2203.07378v4)

Published 14 Mar 2022 in eess.AS, cs.LG, and cs.SD

Abstract: Recent advances in transformer-based architectures which are pre-trained in self-supervised manner have shown great promise in several machine learning tasks. In the audio domain, such architectures have also been successfully utilised in the field of speech emotion recognition (SER). However, existing works have not evaluated the influence of model size and pre-training data on downstream performance, and have shown limited attention to generalisation, robustness, fairness, and efficiency. The present contribution conducts a thorough analysis of these aspects on several pre-trained variants of wav2vec 2.0 and HuBERT that we fine-tuned on the dimensions arousal, dominance, and valence of MSP-Podcast, while additionally using IEMOCAP and MOSI to test cross-corpus generalisation. To the best of our knowledge, we obtain the top performance for valence prediction without use of explicit linguistic information, with a concordance correlation coefficient (CCC) of .638 on MSP-Podcast. Furthermore, our investigations reveal that transformer-based architectures are more robust to small perturbations compared to a CNN-based baseline and fair with respect to biological sex groups, but not towards individual speakers. Finally, we are the first to show that their extraordinary success on valence is based on implicit linguistic information learnt during fine-tuning of the transformer layers, which explains why they perform on-par with recent multimodal approaches that explicitly utilise textual information. Our findings collectively paint the following picture: transformer-based architectures constitute the new state-of-the-art in SER, but further advances are needed to mitigate remaining robustness and individual speaker issues. To make our findings reproducible, we release the best performing model to the community.

Authors (7)
  1. Johannes Wagner (6 papers)
  2. Andreas Triantafyllopoulos (42 papers)
  3. Hagen Wierstorf (8 papers)
  4. Maximilian Schmitt (13 papers)
  5. Felix Burkhardt (11 papers)
  6. Florian Eyben (14 papers)
  7. Björn W. Schuller (153 papers)
Citations (234)

Summary

Analyzing the Transformer Era in Speech Emotion Recognition

The paper "Dawn of the transformer era in speech emotion recognition: closing the valence gap" by Wagner et al. presents an extensive evaluation of transformer-based architectures for Speech Emotion Recognition (SER), with a particular focus on valence prediction. The paper fine-tunes several pre-trained variants of wav2vec 2.0 and HuBERT and analyzes their generalization, robustness, fairness, and efficiency in comparison to conventional Convolutional Neural Networks (CNNs).

The paper identifies and addresses key challenges in real-world SER applications: boosting valence performance, ensuring generalization and robustness, and resolving fairness issues. Of particular interest is a Concordance Correlation Coefficient (CCC) of 0.638 for valence prediction on the MSP-Podcast corpus without explicit linguistic information, suggesting that transformers implicitly pick up linguistic cues without dedicated Automatic Speech Recognition (ASR) or NLP modules.
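The CCC metric used throughout the paper rewards agreement in both correlation and scale: it equals Pearson correlation only when predictions also match the labels in mean and variance. A minimal NumPy sketch (the function name is illustrative, not from the paper's code):

```python
import numpy as np

def ccc(y_true: np.ndarray, y_pred: np.ndarray) -> float:
    """Concordance correlation coefficient (Lin, 1989)."""
    mu_t, mu_p = y_true.mean(), y_pred.mean()
    cov = np.mean((y_true - mu_t) * (y_pred - mu_p))
    return 2 * cov / (y_true.var() + y_pred.var() + (mu_t - mu_p) ** 2)

y = np.array([0.1, 0.4, 0.6, 0.9])
print(ccc(y, y))        # 1.0 for perfect agreement
print(ccc(y, y + 0.5))  # < 1.0: a constant shift is penalised,
                        # unlike plain Pearson correlation
```

The denominator's `(mu_t - mu_p) ** 2` term is what makes CCC stricter than correlation alone, which matters when regressing absolute arousal, dominance, and valence values.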

Technical Evaluation

The investigation utilizes the arousal-dominance-valence framework for emotional representation, employing datasets such as MSP-Podcast, IEMOCAP, and MOSI for both in-domain and cross-corpus evaluation. The transformers' performance was evaluated across several aspects:

  1. Performance and Fine-Tuning: Transformer models, when fine-tuned, exhibit superior performance in valence prediction compared to conventional CNN models. The paper shows that fine-tuning the transformer layers is critical for capturing the linguistic cues behind valence, as demonstrated by synthesizing spoken content and observing how performance changes.
  2. Layer Significance and Data Utilization: It is shown that large amounts of training data do not universally translate into better performance; the diversity of the training data is crucial. Interestingly, transformer models pre-trained on multilingual data appear disadvantaged on English-only tasks compared with their monolingual counterparts.
  3. Robustness and Fairness: Robustness assessments reveal that transformers handle small input perturbations well, maintaining stable performance even on degraded inputs. The fairness evaluation shows little bias with respect to biological sex, but speaker-level analysis reveals substantial variation in performance across individual speakers, underscoring the need to ensure fair treatment of each speaker.
  4. Generalization and Efficiency: The models generalize well across domains, outperforming CNNs in cross-corpus evaluations. They are also data-efficient, maintaining performance even with reduced training data, which makes them suitable for varied real-life scenarios.
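The robustness protocol in item 3 amounts to perturbing the input audio at controlled signal-to-noise ratios and re-scoring the model. A small sketch of such a perturbation utility, assuming additive white Gaussian noise (the helper name and the sine-wave stand-in for real speech are illustrative only; an actual sweep would re-run the fine-tuned model on each noisy waveform and compare CCC against the clean baseline):

```python
import numpy as np

def add_white_noise(wave: np.ndarray, snr_db: float, seed: int = 0) -> np.ndarray:
    """Return `wave` plus white Gaussian noise scaled to the requested SNR (dB)."""
    rng = np.random.default_rng(seed)
    noise = rng.standard_normal(wave.shape)
    # Scale the noise so that 10 * log10(P_signal / P_noise) == snr_db.
    scale = np.sqrt(wave.var() / (noise.var() * 10 ** (snr_db / 10)))
    return wave + scale * noise

# 1 s of a 220 Hz tone at 16 kHz as a stand-in for a speech waveform.
wave = np.sin(2 * np.pi * 220 * np.linspace(0, 1, 16000, endpoint=False))
for snr in (30, 20, 10):
    noisy = add_white_noise(wave, snr)
    measured = 10 * np.log10(wave.var() / (noisy - wave).var())
    print(f"target {snr:2d} dB -> measured {measured:.1f} dB")
```

Sweeping the SNR downward and plotting the resulting CCC against the clean-input score gives a simple robustness curve of the kind the paper's comparison between transformers and the CNN baseline implies.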

Implications and Future Directions

The results illuminate pathways for deploying SER systems based on deep transformer architectures in real-world applications. Given that valence has historically been hard to capture from paralinguistic features alone, the implicit linguistic information that transformers acquire opens new avenues for SER. Moreover, their generalization, robustness, and fairness properties alleviate longstanding obstacles to deployed emotion recognition systems.

Further research might focus on multimodal architectures that integrate explicit textual and paralinguistic cues, examining biases in greater detail, and investigating underspecification issues. Moreover, understanding cross-linguistic generalization could be pivotal for SER systems aimed at global applications. The paper demonstrates that while transformers mark a significant advance toward comprehensive SER, careful investigation of linguistic dependencies and architectural optimizations will be needed as the field progresses.