
Feature-level and Model-level Audiovisual Fusion for Emotion Recognition in the Wild (1906.02728v1)

Published 6 Jun 2019 in cs.CV

Abstract: Emotion recognition plays an important role in human-computer interaction (HCI) and has been extensively studied for decades. Although tremendous improvements have been achieved for posed expressions, recognizing human emotions in "close-to-real-world" environments remains a challenge. In this paper, we propose two strategies to fuse information extracted from different modalities, i.e., audio and visual. Specifically, we utilize LBP-TOP, an ensemble of CNNs, and a bi-directional LSTM (BLSTM) to extract features from the visual channel, and the OpenSmile toolkit to extract features from the audio channel. Two kinds of fusion methods, i.e., feature-level fusion and model-level fusion, were developed to utilize the information extracted from the two channels. Experimental results on the EmotiW2018 AFEW dataset show that the proposed fusion methods significantly outperform the baseline methods and achieve better, or at least comparable, performance compared with state-of-the-art methods; the model-level fusion performs better when one of the channels fails entirely.
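The two fusion strategies in the abstract can be illustrated with a minimal sketch. This is not the paper's implementation: the feature dimensions, the modality weight, and the averaging rule for model-level fusion are illustrative assumptions. Feature-level fusion concatenates modality descriptors before a single classifier, while model-level fusion combines per-modality class scores, which degrades gracefully when one channel produces no signal.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical per-modality descriptors (dimensions are illustrative,
# not taken from the paper).
visual_feat = rng.random(256)   # e.g., CNN/BLSTM visual descriptor
audio_feat = rng.random(128)    # e.g., OpenSmile audio descriptor

# Feature-level fusion: concatenate modality features into one vector,
# which then feeds a single classifier.
fused_features = np.concatenate([visual_feat, audio_feat])  # shape (384,)

# Model-level fusion: combine per-modality class probabilities instead.
# A weighted average still yields a usable score if one channel fails.
num_classes = 7  # AFEW covers seven emotion categories
visual_probs = np.full(num_classes, 1.0 / num_classes)
audio_probs = np.zeros(num_classes)  # simulate a totally failed audio channel
w = 0.6                              # illustrative modality weight
fused_probs = w * visual_probs + (1 - w) * audio_probs
prediction = int(np.argmax(fused_probs))
```

Even with the audio channel zeroed out, `fused_probs` remains a valid (unnormalized) score vector driven by the visual channel, which mirrors the abstract's claim that model-level fusion performs better when one channel totally fails.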

Authors (9)
  1. Jie Cai (44 papers)
  2. Zibo Meng (27 papers)
  3. Ahmed Shehab Khan (5 papers)
  4. Zhiyuan Li (304 papers)
  5. James O'Reilly (5 papers)
  6. Shizhong Han (26 papers)
  7. Ping Liu (93 papers)
  8. Min Chen (200 papers)
  9. Yan Tong (15 papers)
Citations (26)
