
MERT: Acoustic Music Understanding Model with Large-Scale Self-supervised Training (2306.00107v5)

Published 31 May 2023 in cs.SD, cs.AI, cs.CL, cs.LG, and eess.AS

Abstract: Self-supervised learning (SSL) has recently emerged as a promising paradigm for training generalisable models on large-scale data in the fields of vision, text, and speech. Although SSL has been proven effective in speech and audio, its application to music audio has yet to be thoroughly explored. This is partially due to the distinctive challenges associated with modelling musical knowledge, particularly tonal and pitched characteristics of music. To address this research gap, we propose an acoustic Music undERstanding model with large-scale self-supervised Training (MERT), which incorporates teacher models to provide pseudo labels in the masked language modelling (MLM) style acoustic pre-training. In our exploration, we identified an effective combination of teacher models, which outperforms conventional speech and audio approaches in terms of performance. This combination includes an acoustic teacher based on Residual Vector Quantisation - Variational AutoEncoder (RVQ-VAE) and a musical teacher based on the Constant-Q Transform (CQT). Furthermore, we explore a wide range of settings to overcome the instability in acoustic language model pre-training, which allows our designed paradigm to scale from 95M to 330M parameters. Experimental results indicate that our model can generalise and perform well on 14 music understanding tasks and attain state-of-the-art (SOTA) overall scores.

Citations (75)

Summary

  • The paper presents MERT, a novel model that leverages self-supervised learning to achieve state-of-the-art results on multiple music information retrieval tasks.
  • It uses a multi-teacher framework combining RVQ-VAE and Constant-Q Transform to effectively capture both acoustic nuances and musical structure.
  • The model demonstrates remarkable efficiency by operating at only 6.6% of Jukebox's parameter size, making it scalable and accessible for diverse MIR applications.

Overview of MERT: Acoustic Music Understanding Model with Large-Scale Self-supervised Training

The paper introduces MERT, an innovative approach to acoustic music understanding based on large-scale self-supervised learning (SSL). Tackling the challenges of capturing the tonal and pitched characteristics of music, MERT employs a unique combination of teacher models, leading to substantial gains over existing speech and audio approaches on various music information retrieval (MIR) tasks.

Methodology

The authors developed MERT with the goal of generalising across numerous MIR tasks without task-specific models. The paradigm integrates pre-trained models in a multi-task format, using the Residual Vector Quantisation - Variational AutoEncoder (RVQ-VAE) and the Constant-Q Transform (CQT) as teacher models. This approach facilitates robust acoustic and musical representation learning via masked language modelling (MLM).
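The multi-teacher masked-prediction objective can be illustrated with a minimal NumPy sketch: cross-entropy against the acoustic teacher's discrete RVQ codes plus a reconstruction term against the musical teacher's CQT frames, computed only on masked positions. All names, shapes, and the weighting `lam` below are illustrative assumptions, not the paper's exact formulation.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy shapes: T frames, K RVQ codebook entries, F CQT bins, D model dim.
T, K, F, D = 20, 8, 12, 16

# Stand-in encoder outputs for each (possibly masked) frame.
hidden = rng.normal(size=(T, D))

# Pseudo-labels from the two teachers (assumed names, illustration only):
rvq_codes = rng.integers(0, K, size=T)   # acoustic teacher: discrete RVQ-VAE codes
cqt_target = rng.normal(size=(T, F))     # musical teacher: CQT spectrogram frames

# Randomly mask a subset of frames; the loss is computed only there.
mask = rng.random(T) < 0.5

# Two linear prediction heads (randomly initialised here).
W_code = rng.normal(size=(D, K)) * 0.1
W_cqt = rng.normal(size=(D, F)) * 0.1

def softmax(x):
    e = np.exp(x - x.max(axis=-1, keepdims=True))
    return e / e.sum(axis=-1, keepdims=True)

# Acoustic loss: cross-entropy against the discrete RVQ codes.
probs = softmax(hidden @ W_code)
ce = -np.log(probs[np.arange(T), rvq_codes] + 1e-9)

# Musical loss: reconstruction (MSE) of the CQT frames.
mse = ((hidden @ W_cqt - cqt_target) ** 2).mean(axis=-1)

# Combined masked-prediction objective, averaged over masked frames.
lam = 1.0  # relative weighting of the two teachers (assumed hyperparameter)
loss = (ce[mask] + lam * mse[mask]).mean()
print(float(loss))
```

In the actual model the prediction heads sit on top of a deep transformer encoder and the RVQ target involves several codebooks; the sketch only shows how the two teacher signals combine into one masked objective.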

Key components include:

  • Acoustic Teacher: Utilizes RVQ-VAE to produce discretized acoustic-level summaries of music. These targets are preferred over traditional features such as MFCCs, whose ability to capture the complexities of music audio is limited.
  • Musical Teacher: Adopts CQT to capture pitch and harmonic bias, an area where speech processing models like HuBERT fall short.
  • Instability Solutions: Innovations like attention relaxation and pre-layer normalization enhance training stability, critical when scaling models up to 330M parameters.
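Of the stability measures above, pre-layer normalization is the easiest to illustrate: the layer norm is applied before each sub-layer rather than after the residual addition, keeping residual-path activations bounded as depth grows. The following NumPy sketch shows the pre-LN ordering only (it does not reproduce the paper's attention-relaxation trick); all weights and shapes are toy assumptions.

```python
import numpy as np

rng = np.random.default_rng(1)
T, D = 6, 8  # sequence length, model dimension

def layer_norm(x, eps=1e-5):
    mu = x.mean(axis=-1, keepdims=True)
    var = x.var(axis=-1, keepdims=True)
    return (x - mu) / np.sqrt(var + eps)

def attention(x, Wq, Wk, Wv):
    q, k, v = x @ Wq, x @ Wk, x @ Wv
    scores = q @ k.T / np.sqrt(x.shape[-1])
    w = np.exp(scores - scores.max(axis=-1, keepdims=True))
    w = w / w.sum(axis=-1, keepdims=True)
    return w @ v

Wq, Wk, Wv = (rng.normal(size=(D, D)) * 0.1 for _ in range(3))
W_ff1 = rng.normal(size=(D, 4 * D)) * 0.1
W_ff2 = rng.normal(size=(4 * D, D)) * 0.1

def pre_ln_block(x):
    # Pre-LN: normalise *before* each sub-layer, then add the residual.
    # Post-LN would instead normalise after the residual addition, which
    # tends to be harder to train at larger depths/widths.
    x = x + attention(layer_norm(x), Wq, Wk, Wv)
    x = x + np.maximum(layer_norm(x) @ W_ff1, 0) @ W_ff2  # ReLU feed-forward
    return x

x = rng.normal(size=(T, D))
y = pre_ln_block(x)
print(y.shape)
```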

Experimental Results

MERT's performance is assessed across 14 MIR tasks. It attains state-of-the-art (SOTA) results in tasks that emphasize local-level music information, such as beat tracking and pitch classification, while remaining competitive in global information tasks like music tagging and genre classification.
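Downstream evaluation of SSL models like MERT typically freezes the pre-trained encoder and trains a lightweight probe on its embeddings. The following sketch uses random stand-in features and a ridge-regularised least-squares linear probe; the shapes, the synthetic class structure, and the probe choice are illustrative assumptions, not the paper's exact evaluation protocol.

```python
import numpy as np

rng = np.random.default_rng(2)

# Toy stand-in for frozen, clip-level pooled embeddings: N clips,
# D-dim features, C classes (e.g. genre labels).
N, D, C = 200, 16, 4
features = rng.normal(size=(N, D))
labels = rng.integers(0, C, size=N)
# Make the toy task learnable by shifting each class's features.
class_means = rng.normal(size=(C, D)) * 2.0
features += class_means[labels]

# One-hot targets and a ridge-regularised least-squares linear probe.
Y = np.eye(C)[labels]
lam = 1e-2
W = np.linalg.solve(features.T @ features + lam * np.eye(D), features.T @ Y)

preds = (features @ W).argmax(axis=1)
accuracy = (preds == labels).mean()
print(round(float(accuracy), 2))
```

Swapping in real frozen MERT embeddings for `features` turns this into the standard linear-probing recipe used to compare SSL encoders across MIR tasks.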

Numerical Results & Comparisons

  • MERT-330M achieves remarkable scores across MIR tasks, matching or exceeding the SOTA results previously achieved by a combination of 10 different models.
  • The architecture's lightweight nature (only 6.6% of Jukebox's parameter size) demonstrates its computational efficiency, making it suitable for large-scale deployments.
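The 6.6% figure follows from comparing MERT-330M's 330M parameters against Jukebox's roughly 5B-parameter model:

```python
mert_params = 330e6      # MERT-330M
jukebox_params = 5e9     # Jukebox's largest model (~5B parameters)
ratio = mert_params / jukebox_params
print(f"{ratio:.1%}")    # prints "6.6%"
```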

Implications and Future Directions

By reducing reliance on large, cumbersome models, MERT stands as an accessible, computationally affordable model poised for broad use in both academic and industrial MIR applications. Its open-source availability ensures that researchers can extend this work further.

Future developments could address:

  • Enhancing global pattern recognition through extended sequence-length training that captures longer musical contexts.
  • Exploiting deeper and wider teacher model embeddings to further improve fidelity and generalisability.

Concluding Remarks

The authors succeed in advancing the music understanding field by refining the pre-training of acoustic models using SSL. They offer a compelling alternative to current SOTA methods by harnessing robust teacher model combinations to yield substantial improvements across multiple tasks. MERT opens impactful avenues for further research, promising continued development in machine understanding of music.
