
Semi-supervised Bayesian Deep Multi-modal Emotion Recognition (1704.07548v1)

Published 25 Apr 2017 in cs.AI, cs.LG, and stat.ML

Abstract: In emotion recognition, it is difficult to recognize human emotional states from a single modality. Moreover, annotating physiological emotional data is particularly expensive. These two factors make building an effective emotion recognition model challenging. In this paper, we first build a multi-view deep generative model to simulate the generative process of multi-modality emotional data. By imposing a mixture-of-Gaussians assumption on the posterior approximation of the latent variables, our model can learn a shared deep representation from multiple modalities. To address the scarcity of labeled data, we further extend our multi-view model to the semi-supervised learning scenario by casting semi-supervised classification as a specialized missing-data imputation task. Our semi-supervised multi-view deep generative framework can leverage both labeled and unlabeled data from multiple modalities, and the weight factor for each modality can be learned automatically. Compared with previous emotion recognition methods, our method is more robust and flexible. Experiments conducted on two real multi-modal emotion datasets demonstrate the superiority of our framework over a number of competitors.
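The core idea of combining per-modality posteriors under a mixture-of-Gaussians assumption, with automatically learned modality weight factors, can be illustrated with a minimal sketch. The function names, the use of diagonal Gaussians, and the softmax parameterization of the weights are illustrative assumptions, not the paper's exact formulation:

```python
import numpy as np

def mog_posterior(mus, logvars, logits):
    """Combine per-modality Gaussian posteriors q_m(z|x_m) into a
    mixture-of-Gaussians approximation of the shared latent posterior.

    mus, logvars: arrays of shape (M, d), one row per modality, as would
    be produced by each modality's encoder network (hypothetical here).
    logits: shape (M,), unnormalized modality weight factors; a softmax
    turns them into mixture weights that can be learned automatically.
    Returns the mixture's mean and diagonal variance, each of shape (d,).
    """
    w = np.exp(logits - logits.max())
    w = w / w.sum()                       # modality weights, sum to 1
    var_m = np.exp(logvars)               # per-modality variances
    mu = (w[:, None] * mus).sum(axis=0)   # mixture mean
    # Law of total variance: E[var] + E[mu^2] - (E[mu])^2.
    var = (w[:, None] * (var_m + mus ** 2)).sum(axis=0) - mu ** 2
    return mu, var

# Two modalities (e.g. EEG and eye movements), 2-dim latent space.
mus = np.array([[1.0, 3.0], [3.0, 1.0]])
logvars = np.zeros((2, 2))   # unit variance per modality
logits = np.zeros(2)         # equal weights before any learning
mu, var = mog_posterior(mus, logvars, logits)
```

With equal weights, the mixture mean is the average of the two modality means; training would adjust `logits` so more informative modalities receive larger weights.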

Authors (6)
  1. Changde Du (25 papers)
  2. Changying Du (6 papers)
  3. Jinpeng Li (67 papers)
  4. Huiguang He (26 papers)
  5. Bao-Liang Lu (26 papers)
  6. Wei-Long Zheng (14 papers)
Citations (9)
