A Multimodal Approach towards Emotion Recognition of Music using Audio and Lyrical Content (1811.05760v1)

Published 10 Oct 2018 in eess.AS, cs.CL, cs.CV, cs.MM, and cs.SD

Abstract: We propose MoodNet, a deep convolutional neural network based architecture to effectively predict the emotion associated with a piece of music given its audio and lyrical content. We evaluate different architectures consisting of varying numbers of two-dimensional convolutional and subsampling layers, followed by dense layers. We use Mel-spectrograms to represent the audio content and word embeddings, specifically 100-dimensional word vectors, to represent the textual content of the lyrics. We feed input data from both modalities to our MoodNet architecture. The outputs from the two modalities are then fused in a fully connected layer, and a softmax classifier is used to predict the emotion category. Using F1-score as our metric, our results show excellent performance of MoodNet on the two datasets we experimented on: the MIREX Multimodal dataset and the Million Song Dataset. Our experiments support the hypothesis that more complex models perform better with more training data. We also observe that lyrics outperform audio as the more expressive modality, and conclude that combining features from multiple modalities for prediction tasks yields superior performance compared to using a single modality as input.
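
The abstract describes the architecture only at a high level, so the following PyTorch sketch is one plausible reading of it, not the paper's exact configuration: every layer count, channel width, input shape, and the number of emotion classes below is an illustrative assumption. It shows the structural idea the abstract names: one stack of 2D convolution and subsampling layers per modality, fusion of the two branch outputs in a fully connected layer, and a softmax classifier (applied here via the cross-entropy loss).

```python
import torch
import torch.nn as nn

class ConvBranch(nn.Module):
    """A stack of 2D convolution + max-pooling (subsampling) blocks,
    one such branch per modality. Depth and channel counts are assumptions."""
    def __init__(self, in_channels=1, num_blocks=3, base_channels=32):
        super().__init__()
        layers = []
        channels = in_channels
        for i in range(num_blocks):
            out_channels = base_channels * (2 ** i)  # 32, 64, 128
            layers += [
                nn.Conv2d(channels, out_channels, kernel_size=3, padding=1),
                nn.ReLU(),
                nn.MaxPool2d(2),  # subsampling layer
            ]
            channels = out_channels
        self.features = nn.Sequential(*layers)
        self.pool = nn.AdaptiveAvgPool2d(1)  # collapse to a fixed-size vector

    def forward(self, x):
        return self.pool(self.features(x)).flatten(1)  # (batch, 128)

class MoodNet(nn.Module):
    """Two-branch network: one branch for the Mel-spectrogram, one for the
    lyrics' word-embedding matrix; branch outputs are fused and classified."""
    def __init__(self, num_emotions=4):  # class count is a placeholder
        super().__init__()
        self.audio_branch = ConvBranch()
        self.lyrics_branch = ConvBranch()
        self.fusion = nn.Sequential(
            nn.Linear(128 + 128, 64),
            nn.ReLU(),
            nn.Linear(64, num_emotions),  # softmax is applied inside the loss
        )

    def forward(self, mel_spec, lyric_embeddings):
        a = self.audio_branch(mel_spec)            # (batch, 128)
        t = self.lyrics_branch(lyric_embeddings)   # (batch, 128)
        return self.fusion(torch.cat([a, t], dim=1))

# Hypothetical input shapes: 128 mel bands x 646 frames for audio,
# 200 lyric words x 100-dimensional embeddings (the 100-dim vectors
# match the abstract; the sequence length is assumed).
model = MoodNet(num_emotions=4)
mel = torch.randn(8, 1, 128, 646)
lyrics = torch.randn(8, 1, 200, 100)
logits = model(mel, lyrics)   # train with nn.CrossEntropyLoss
print(logits.shape)           # torch.Size([8, 4])
```

Treating the embedding matrix as a 2D "image" lets the same convolutional branch design serve both modalities; whether the paper shares the branch architecture this way, or uses different depths per modality, is not specified in the abstract.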

Citations (9)
