Multi-Modality in Music: Predicting Emotion in Music from High-Level Audio Features and Lyrics (2302.13321v1)
Published 26 Feb 2023 in cs.SD, cs.CL, cs.MM, and eess.AS
Abstract: This paper tests whether a multi-modal approach to music emotion recognition (MER) outperforms a uni-modal one when using high-level song features and lyrics. We use 11 song features retrieved from the Spotify API, combined with lyric features including sentiment, TF-IDF, and ANEW scores, to predict valence and arousal (Russell, 1980) scores on the Deezer Mood Detection Dataset (DMDD) (Delbouys et al., 2018) with four different regression models. We find that, of the 11 high-level song features, mainly five contribute to performance, and that multi-modal features outperform audio features alone when predicting valence. We have made our code publicly available.
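To make the described setup concrete, below is a minimal sketch (in Python, using scikit-learn) of one way to combine high-level audio features with TF-IDF lyric features in a single regression model. The dataset file, column names, and the choice of Ridge regression are illustrative assumptions, not the paper's actual pipeline.

```python
# Sketch of a multi-modal MER regression: high-level audio features
# concatenated with TF-IDF lyric features, used to predict valence.
# Data loading, column names, and the Ridge model are assumptions
# for illustration only.
import pandas as pd
from scipy.sparse import csr_matrix, hstack
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import Ridge
from sklearn.model_selection import train_test_split

# Hypothetical table with Spotify-style audio features, raw lyrics,
# and a valence label (e.g., from DMDD-style annotations).
df = pd.read_csv("songs_with_lyrics_and_valence.csv")
audio_cols = ["danceability", "energy", "loudness", "speechiness",
              "acousticness", "instrumentalness", "liveness", "tempo"]

X_train, X_test, y_train, y_test = train_test_split(
    df, df["valence"], test_size=0.2, random_state=42)

# Lyric features: TF-IDF over the raw lyrics text.
vectorizer = TfidfVectorizer(max_features=5000)
lyrics_train = vectorizer.fit_transform(X_train["lyrics"])
lyrics_test = vectorizer.transform(X_test["lyrics"])

# Multi-modal features: concatenate audio columns with TF-IDF vectors.
features_train = hstack([csr_matrix(X_train[audio_cols].values), lyrics_train])
features_test = hstack([csr_matrix(X_test[audio_cols].values), lyrics_test])

model = Ridge(alpha=1.0)
model.fit(features_train, y_train)
print("R^2 on held-out songs:", model.score(features_test, y_test))
```

A uni-modal baseline for comparison would simply drop one of the two feature blocks (audio columns only, or TF-IDF only) before fitting the same regressor.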