
Emotion Recognition from Multiple Modalities: Fundamentals and Methodologies (2108.10152v1)

Published 18 Aug 2021 in eess.SP, cs.AI, cs.LG, and cs.MM

Abstract: Humans are emotional creatures. Multiple modalities are often involved when we express emotions, whether we do so explicitly (e.g., facial expression, speech) or implicitly (e.g., text, image). Enabling machines to have emotional intelligence, i.e., recognizing, interpreting, processing, and simulating emotions, is becoming increasingly important. In this tutorial, we discuss several key aspects of multi-modal emotion recognition (MER). We begin with a brief introduction on widely used emotion representation models and affective modalities. We then summarize existing emotion annotation strategies and corresponding computational tasks, followed by the description of main challenges in MER. Furthermore, we present some representative approaches on representation learning of each affective modality, feature fusion of different affective modalities, classifier optimization for MER, and domain adaptation for MER. Finally, we outline several real-world applications and discuss some future directions.

Authors (5)
  1. Sicheng Zhao (53 papers)
  2. Guoli Jia (7 papers)
  3. Jufeng Yang (21 papers)
  4. Guiguang Ding (79 papers)
  5. Kurt Keutzer (200 papers)
Citations (93)
