Objective Human Affective Vocal Expression Detection and Automatic Classification with Stochastic Models and Learning Systems

Published 4 Oct 2019 in eess.AS (arXiv:1910.01967v1)

Abstract: This paper presents a comprehensive analysis of affective vocal expression classification systems. In this study, state-of-the-art acoustic features are compared to two novel affective vocal prints for the detection of emotional states: the Hilbert-Huang-Hurst Coefficients (HHHC) and the index of non-stationarity (INS) vector. HHHC is proposed here as a nonlinear vocal source feature vector that represents affective states according to their effects on the speech production mechanism. Emotional states are highlighted by an empirical mode decomposition (EMD) based method, which exploits the non-stationarity of the affective acoustic variations. Hurst coefficients (closely related to the excitation source) are then estimated from the decomposition process to compose the feature vector. Additionally, the INS vector is introduced as dynamic information complementary to the HHHC feature. The proposed features are evaluated in speech emotion classification experiments on three databases in the German and English languages. Three state-of-the-art acoustic features are adopted as baselines. The $\alpha$-integrated Gaussian mixture model ($\alpha$-GMM) is also introduced for emotion representation and classification. Its performance is compared to competing stochastic and machine learning classifiers. Results demonstrate that HHHC leads to significant classification improvements over the baseline acoustic features. Moreover, results also show that $\alpha$-GMM outperforms the competing classification methods. Finally, HHHC and INS are also evaluated as complementary features for the GeMAPS and eGeMAPS feature sets.
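For readers who want a concrete picture of the HHHC pipeline sketched in the abstract, the fragment below decomposes a speech frame with EMD and estimates one Hurst coefficient per intrinsic mode function (IMF). This is a minimal illustrative sketch, not the authors' implementation: the PyEMD library, the rescaled-range (R/S) Hurst estimator, the number of modes, and the function names hurst_rs and hhhc are all assumptions made here; the paper's exact estimator and vector composition may differ.

import numpy as np
from PyEMD import EMD  # EMD-signal package; an assumed tool, not prescribed by the paper

def hurst_rs(x):
    # Rescaled-range (R/S) estimate of the Hurst exponent of a 1-D signal.
    # The slope of log(R/S) against log(window size) approximates H.
    x = np.asarray(x, dtype=float)
    n = len(x)
    # Window sizes spaced logarithmically between 8 samples and n/2.
    sizes = np.unique(np.logspace(3, np.log2(n // 2), num=10, base=2).astype(int))
    log_s, log_rs = [], []
    for s in sizes:
        ratios = []
        for i in range(0, n - s + 1, s):
            c = x[i:i + s]
            z = np.cumsum(c - c.mean())  # cumulative deviation within the window
            sd = c.std()
            if sd > 0:
                ratios.append((z.max() - z.min()) / sd)
        if ratios:
            log_s.append(np.log(s))
            log_rs.append(np.log(np.mean(ratios)))
    slope, _ = np.polyfit(log_s, log_rs, 1)
    return slope

def hhhc(frame, n_modes=5):
    # Hypothetical HHHC-style feature vector: EMD of a speech frame,
    # then one Hurst coefficient per IMF (first n_modes modes).
    imfs = EMD().emd(np.asarray(frame, dtype=float))
    return np.array([hurst_rs(imf) for imf in imfs[:n_modes]])

As for the $\alpha$-GMM classifier, a common formulation from the $\alpha$-integration literature (assumed here, since the abstract does not spell it out) replaces the linear mixture of a standard GMM with Amari's $\alpha$-mean, $p_\alpha(x) \propto \big[\sum_k w_k\, \mathcal{N}(x;\mu_k,\Sigma_k)^{(1-\alpha)/2}\big]^{2/(1-\alpha)}$ for $\alpha < 1$, which reduces to the conventional GMM at $\alpha = -1$.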
