Speech Emotion Recognition using Self-Supervised Features (2202.03896v1)
Abstract: Self-supervised pre-trained features have consistently delivered state-of-the-art results in the field of NLP; however, their merits in the field of speech emotion recognition (SER) still need further investigation. In this paper we introduce a modular End-to-End (E2E) SER system based on an Upstream + Downstream architecture paradigm, which allows easy use/integration of a large variety of self-supervised features. Several SER experiments for predicting categorical emotion classes from the IEMOCAP dataset are performed. These experiments investigate interactions among fine-tuning of self-supervised feature models, aggregation of frame-level features into utterance-level features, and back-end classification networks. The proposed monomodal, speech-only system not only achieves SOTA results, but also shows that powerful, well fine-tuned self-supervised acoustic features can reach results similar to those achieved by SOTA multimodal systems using both speech and text modalities.
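The Upstream + Downstream paradigm described in the abstract can be sketched as follows. This is a minimal illustration, not the authors' implementation: the upstream self-supervised model (e.g., a wav2vec-style encoder) is stood in for by a random frame-level feature matrix, and the aggregation and back-end classifier are reduced to mean pooling plus a softmax linear layer. All dimensions and the four-class IEMOCAP label set are assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)

# Stand-in for upstream self-supervised features: T frames x D dims.
# In the paper these would come from a pre-trained acoustic model.
T, D = 120, 768  # hypothetical frame count and feature size
frame_feats = rng.standard_normal((T, D))

# Aggregation: collapse frame-level features into one utterance-level vector.
utt_feat = frame_feats.mean(axis=0)  # shape (D,)

# Downstream back-end: a single linear layer + softmax over four
# categorical emotions (an assumed IEMOCAP label subset).
CLASSES = ["angry", "happy", "neutral", "sad"]
W = rng.standard_normal((D, len(CLASSES))) * 0.01
b = np.zeros(len(CLASSES))

logits = utt_feat @ W + b
probs = np.exp(logits - logits.max())
probs /= probs.sum()

pred = CLASSES[int(np.argmax(probs))]
```

In the modular system described, swapping the upstream model, the aggregation function (e.g., mean pooling vs. attentive pooling), or the back-end network would each correspond to replacing one of the three stages above.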
- Edmilson Morais (7 papers)
- Ron Hoory (15 papers)
- Weizhong Zhu (3 papers)
- Itai Gat (30 papers)
- Matheus Damasceno (2 papers)
- Hagai Aronowitz (8 papers)