Emotion Recognition with Pre-Trained Transformers Using Multimodal Signals (2212.13885v1)

Published 22 Dec 2022 in eess.SP, cs.AI, and cs.LG

Abstract: In this paper, we address the problem of multimodal emotion recognition from multiple physiological signals. We demonstrate that a Transformer-based approach is suitable for this task. In addition, we present how such models may be pre-trained in a multimodal scenario to improve emotion recognition performance. We evaluate the benefits of using multimodal inputs and pre-training with our approach on a state-of-the-art dataset.
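The abstract describes attending over multiple physiological signal streams with a Transformer. The following is a minimal, hypothetical sketch of that idea, not the paper's actual architecture: two assumed modalities (e.g. ECG and EDA token embeddings) are concatenated and passed through a single self-attention layer, then pooled into emotion logits. All dimensions, the number of emotion classes, and the fusion-by-concatenation choice are illustrative assumptions.

```python
import numpy as np

def self_attention(x, rng):
    """Single-head scaled dot-product attention (minimal sketch)."""
    d = x.shape[-1]
    Wq, Wk, Wv = (rng.standard_normal((d, d)) / np.sqrt(d) for _ in range(3))
    q, k, v = x @ Wq, x @ Wk, x @ Wv
    scores = q @ k.T / np.sqrt(d)
    # numerically stable softmax over the key axis
    weights = np.exp(scores - scores.max(axis=-1, keepdims=True))
    weights /= weights.sum(axis=-1, keepdims=True)
    return weights @ v

rng = np.random.default_rng(0)
# Two hypothetical physiological modalities, each already embedded
# into sequences of 16 tokens with 8 features per token.
ecg_tokens = rng.standard_normal((16, 8))
eda_tokens = rng.standard_normal((16, 8))

# Simple early fusion: concatenate the token sequences along the
# sequence axis so attention can mix information across modalities.
fused = np.concatenate([ecg_tokens, eda_tokens], axis=0)  # shape (32, 8)
attended = self_attention(fused, rng)                     # shape (32, 8)

# Mean-pool over tokens, then project to 4 assumed emotion classes.
emotion_logits = attended.mean(axis=0) @ rng.standard_normal((8, 4))
```

In practice the paper's model would stack many such layers with learned weights; this sketch only illustrates how cross-modal mixing can emerge from attending over a concatenated sequence.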

Authors (4)
  1. Juan Vazquez-Rodriguez (3 papers)
  2. Grégoire Lefebvre (4 papers)
  3. Julien Cumin (7 papers)
  4. James L Crowley (8 papers)
Citations (10)
