Contrastive Unsupervised Learning for Speech Emotion Recognition (2102.06357v1)

Published 12 Feb 2021 in cs.SD, cs.LG, and eess.AS

Abstract: Speech emotion recognition (SER) is a key technology to enable more natural human-machine communication. However, SER has long suffered from a lack of public large-scale labeled datasets. To circumvent this problem, we investigate how unsupervised representation learning on unlabeled datasets can benefit SER. We show that the contrastive predictive coding (CPC) method can learn salient representations from unlabeled datasets, which improves emotion recognition performance. In our experiments, this method achieved state-of-the-art concordance correlation coefficient (CCC) performance for all emotion primitives (activation, valence, and dominance) on IEMOCAP. Additionally, on the MSP-Podcast dataset, our method obtained considerable performance improvements compared to baselines.
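The abstract's headline results are reported as concordance correlation coefficient (CCC) scores on the continuous emotion primitives. For readers unfamiliar with the metric, a minimal sketch of Lin's CCC, which the paper uses for evaluation (this implementation is illustrative, not the authors' code):

```python
from statistics import fmean

def ccc(x, y):
    """Concordance correlation coefficient (Lin, 1989) between a list of
    predictions x and ground-truth annotations y. CCC penalizes both low
    correlation and systematic bias in mean or scale, which is why it is
    preferred over plain Pearson correlation for dimensional emotion
    regression (activation, valence, dominance)."""
    mx, my = fmean(x), fmean(y)
    # Population variances and covariance.
    vx = fmean([(a - mx) ** 2 for a in x])
    vy = fmean([(b - my) ** 2 for b in y])
    cov = fmean([(a - mx) * (b - my) for a, b in zip(x, y)])
    # CCC = 2*cov / (var_x + var_y + (mean_x - mean_y)^2), in [-1, 1].
    return 2 * cov / (vx + vy + (mx - my) ** 2)
```

Perfect agreement yields CCC = 1; a constant offset or scale mismatch between predictions and labels lowers the score even when correlation is perfect.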

Authors (9)
  1. Mao Li (15 papers)
  2. Bo Yang (427 papers)
  3. Joshua Levy (3 papers)
  4. Andreas Stolcke (57 papers)
  5. Viktor Rozgic (11 papers)
  6. Spyros Matsoukas (23 papers)
  7. Constantinos Papayiannis (6 papers)
  8. Daniel Bone (1 paper)
  9. Chao Wang (555 papers)
Citations (43)
