Automatic tagging using deep convolutional neural networks (1606.00298v1)

Published 1 Jun 2016 in cs.SD and cs.LG

Abstract: We present a content-based automatic music tagging algorithm using fully convolutional neural networks (FCNs). We evaluate different architectures consisting of 2D convolutional layers and subsampling layers only. In the experiments, we measure the AUC-ROC scores of the architectures with different complexities and input types using the MagnaTagATune dataset, where a 4-layer architecture shows state-of-the-art performance with mel-spectrogram input. Furthermore, we evaluated the performances of the architectures with varying the number of layers on a larger dataset (Million Song Dataset), and found that deeper models outperformed the 4-layer architecture. The experiments show that mel-spectrogram is an effective time-frequency representation for automatic tagging and that more complex models benefit from more training data.

Authors (3)
  1. Keunwoo Choi (42 papers)
  2. George Fazekas (28 papers)
  3. Mark Sandler (66 papers)
Citations (325)

Summary

An Overview of "Automatic Tagging Using Deep Convolutional Neural Networks"

This paper presents a systematic investigation into the use of Fully Convolutional Networks (FCNs) for automatic music tagging, detailing their effectiveness relative to earlier conventional and deep learning approaches. The authors, Choi, Fazekas, and Sandler of Queen Mary University of London, propose a family of FCN architectures composed solely of convolutional and subsampling layers, aimed at improving the accuracy of predicting descriptive music tags from audio signals.

Research Framework and Methodology

The paper tackles automatic music tagging, a multi-label classification task in which multiple descriptive tags (genres, instruments, moods, eras, and so on) are predicted from audio inputs such as mel-spectrograms. FCNs form the backbone of this research: 2D convolutional architectures adapt the spatial learning strengths of CNNs to time-frequency representations of audio. Because they consist only of convolutional and subsampling layers, FCNs avoid parameter-heavy fully connected layers and thereby reduce the risk of overfitting. The proposed architectures, FCN-3 through FCN-7, vary in depth, allowing the authors to explore how model complexity and layer count affect performance; a sketch of such an architecture follows.
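
To make the setup concrete, below is a minimal Keras sketch of a 4-layer fully convolutional tagger in this spirit. The filter counts, pooling sizes, the (96, 1366) input shape, and the build_fcn4 helper are illustrative assumptions, not the paper's exact configuration.

```python
# A minimal sketch of a 4-layer fully convolutional tagger in the spirit of FCN-4,
# written with tf.keras. Filter counts, pooling sizes, and the (96, 1366) input
# shape are illustrative assumptions, not the configuration reported in the paper.
import tensorflow as tf
from tensorflow.keras import layers, models

NUM_TAGS = 50  # e.g. the top-50 tags commonly used with MagnaTagATune

def build_fcn4(input_shape=(96, 1366, 1), num_tags=NUM_TAGS):
    x_in = layers.Input(shape=input_shape)  # mel-spectrogram: (mel bins, frames, 1)
    x = x_in
    # Four blocks of convolution + batch norm + ReLU + max pooling only --
    # no parameter-heavy fully connected layers in the feature extractor.
    for filters, pool in [(64, (2, 4)), (128, (2, 4)), (128, (2, 4)), (192, (3, 5))]:
        x = layers.Conv2D(filters, (3, 3), padding="same")(x)
        x = layers.BatchNormalization()(x)
        x = layers.Activation("relu")(x)
        x = layers.MaxPooling2D(pool_size=pool)(x)
    # Collapse the remaining time-frequency grid and emit one sigmoid per tag,
    # since tagging is multi-label (tags are not mutually exclusive).
    x = layers.GlobalMaxPooling2D()(x)
    out = layers.Dense(num_tags, activation="sigmoid")(x)
    return models.Model(x_in, out)

model = build_fcn4()
model.compile(optimizer="adam", loss="binary_crossentropy")
model.summary()
```

The sigmoid outputs paired with binary cross-entropy reflect the multi-label nature of tagging: each tag is treated as an independent binary decision rather than one class in a softmax.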

The paper uses two distinct datasets to evaluate and compare the architectures: the MagnaTagATune dataset and the Million Song Dataset (MSD). MagnaTagATune serves as a benchmark, given its prevalent use in the music tagging literature, while the much larger MSD is used to test whether deeper models scale when given sufficient training data.
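
In either case, raw audio clips must first be converted into mel-spectrograms before training. The following is a hedged preprocessing sketch using librosa; the 12 kHz sample rate, 96 mel bins, FFT/hop sizes, and the audio_to_melspectrogram helper are assumptions chosen for illustration rather than settings confirmed by the paper.

```python
# A hedged preprocessing sketch: converting an audio clip into a log-scaled
# mel-spectrogram with librosa. Sample rate, mel-bin count, and FFT/hop sizes
# are illustrative assumptions, not the paper's verified settings.
import numpy as np
import librosa

def audio_to_melspectrogram(path, sr=12000, n_fft=512, hop_length=256, n_mels=96):
    y, _ = librosa.load(path, sr=sr, mono=True)        # resample and mix down
    mel = librosa.feature.melspectrogram(
        y=y, sr=sr, n_fft=n_fft, hop_length=hop_length, n_mels=n_mels
    )
    mel_db = librosa.power_to_db(mel, ref=np.max)       # log compression
    return mel_db[..., np.newaxis]                      # add channel axis for 2D convs

# Example: spec = audio_to_melspectrogram("clip.mp3"); spec.shape -> (96, frames, 1)
```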

Key Findings and Contributions

The experimental results support the design choices. On the MagnaTagATune dataset, the 4-layer FCN (FCN-4) achieved a state-of-the-art AUC-ROC score of 0.894, outperforming previous models built on handcrafted features or less sophisticated deep learning pipelines. Mel-spectrogram inputs, which align with human auditory perception, surpassed other time-frequency representations such as MFCC and STFT, highlighting their suitability for automatic music tagging.

Additionally, when trained on the Million Song Dataset, the deeper architectures (FCN-5, FCN-6, and FCN-7) demonstrated improved performance (with FCN-5 achieving an AUC-ROC score of 0.848), underscoring a crucial insight: deeper models benefit substantially from larger datasets, which provide enough training examples to exploit their additional capacity.
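
For context, tag-wise AUC-ROC for a multi-label tagger can be computed as in the short sketch below; the evaluate_tagger helper and the model/test-set placeholders are hypothetical stand-ins, not code from the paper's evaluation.

```python
# A small sketch of tag-wise AUC-ROC for a multi-label tagger using scikit-learn.
# `model`, `x_test`, and `y_test` are placeholders for a trained network and
# held-out data; they are not artifacts from the paper.
import numpy as np
from sklearn.metrics import roc_auc_score

def evaluate_tagger(model, x_test, y_test):
    """y_test: binary matrix of shape (n_clips, n_tags)."""
    scores = model.predict(x_test)  # predicted tag probabilities
    # Macro average: compute AUC-ROC per tag, then average across tags.
    per_tag_auc = [
        roc_auc_score(y_test[:, t], scores[:, t])
        for t in range(y_test.shape[1])
        if y_test[:, t].min() != y_test[:, t].max()  # skip tags absent from the split
    ]
    return float(np.mean(per_tag_auc))
```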

Practical and Theoretical Implications

These results represent a meaningful contribution to music information retrieval (MIR), particularly for music recommendation systems that classify or suggest music based on tags. The work underscores the potential of FCNs for handling high-dimensional auditory data without the complexity and computational cost associated with fully connected layers.

Theoretically, this research advances our understanding of how convolutional architectures perform outside their traditional domains, showing that neural network techniques developed for vision can be effectively transferred to, and refined for, audio processing.

Future Prospects

The authors suggest that further work on FCNs could explore variable input lengths and adaptive training approaches for more generalized automatic tagging systems. Additionally, optimizing deep network designs for computational efficiency remains pivotal, particularly for applications requiring real-time processing or limited computing resources.

In conclusion, this paper offers a thorough analysis of FCNs for music tagging, combining a clean architectural approach with empirical, data-driven validation. It sets a benchmark for future explorations of neural networks within MIR and may encourage further cross-disciplinary innovation and application.