
InspireMusic: Integrating Super Resolution and Large Language Model for High-Fidelity Long-Form Music Generation (2503.00084v1)

Published 28 Feb 2025 in cs.SD, cs.AI, cs.CL, and eess.AS

Abstract: We introduce InspireMusic, a framework that integrates super resolution and a large language model (LLM) for high-fidelity long-form music generation. The unified framework generates high-fidelity music, songs, and audio by incorporating an autoregressive transformer with a super-resolution flow-matching model. This framework enables the controllable generation of high-fidelity long-form music at a higher sampling rate from both text and audio prompts. Our model differs from previous approaches in that we utilize an audio tokenizer with a single codebook that contains richer semantic information, thereby reducing training costs and enhancing efficiency. This combination enables us to achieve high-quality audio generation with long-form coherence of up to $8$ minutes. An autoregressive transformer model based on Qwen 2.5 predicts audio tokens, and a super-resolution flow-matching model then generates high-sampling-rate audio with fine-grained details learned from an acoustic codec model. Comprehensive experiments show that the InspireMusic-1.5B-Long model performs comparably to recent top-tier open-source systems, including MusicGen and Stable Audio 2.0, on subjective and objective evaluations. The code and pre-trained models are released at https://github.com/FunAudioLLM/InspireMusic.

This paper introduces InspireMusic, a novel framework for high-fidelity long-form music generation that combines super-resolution techniques with an LLM. The system is composed of three primary components: audio tokenizers, an autoregressive transformer, and a super-resolution flow-matching model. The framework is designed to generate controllable, high-fidelity audio with long-form coherence, achieving up to 8 minutes of continuous music.
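
To make the three-stage pipeline concrete, here is a minimal sketch of how generation could be wired together. The module names (tokenizer, ar_model, srfm, codec) and their methods are illustrative placeholders, not the released FunAudioLLM API.

```python
import torch

def generate_music(text_prompt: str, tokenizer, ar_model, srfm, codec,
                   max_tokens: int = 8 * 60 * 75) -> torch.Tensor:
    """Text prompt -> coarse 75 Hz audio tokens -> high-resolution waveform."""
    # 1) The autoregressive transformer predicts coarse audio tokens
    #    conditioned on the text prompt (up to ~8 minutes at 75 tokens/s).
    prompt_ids = tokenizer.encode_text(text_prompt)
    coarse_tokens = ar_model.generate(prompt_ids, max_new_tokens=max_tokens)

    # 2) The super-resolution flow-matching model maps the coarse tokens to
    #    fine-grained latent features of a 48 kHz acoustic codec.
    fine_latents = srfm.sample(coarse_tokens)

    # 3) The acoustic codec decoder reconstructs the high-fidelity waveform.
    return codec.decode(fine_latents)
```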

The paper highlights the limitations of existing music generation models: those that excel at capturing long-form musical structure often struggle with audio fidelity, while those that offer high-quality audio may lack global coherence. InspireMusic aims to bridge this gap by integrating these different generative paradigms.

Key elements of the InspireMusic framework include:

  • Audio Tokenization: The framework employs WavTokenizer, which compresses 24kHz audio into discrete tokens at a 75Hz token rate using only one codebook at 0.9 kbps bandwidth. WavTokenizer captures global musical structure and facilitates efficient training and inference for the autoregressive model. It uses a single-codebook VQ approach, broader contextual windows, improved attention networks, and a multi-scale discriminator along with an inverse FFT (Fast Fourier Transform) in the decoder (a back-of-the-envelope check of these rates appears in the first sketch after this list).
  • Autoregressive Transformer: The core of InspireMusic is an AR transformer, utilizing the Qwen 2.5 model series as its backbone LLM. The model predicts the next audio token in a sequence, conditioned on preceding tokens, to generate long sequences with coherence. The transformer is trained using a next-token prediction objective, conditioned on various inputs such as text descriptions ($s_t$), timestamps including time start ($s_{ts}$) and time end ($s_{te}$), music structure ($s_s$), label ($s_l$), and audio tokens ($s_a$), represented as $S = \{s_t^1, s_t^2, \cdots, s_t^m, s_{ts}, s_{te}, s_s, s_l, s_a^1, s_a^2, \cdots, s_a^n\}$, where $T = m + n + 4$ (see the sequence-building sketch after this list). The input dimension sizes of the 0.5B and 1.5B models are $896$ and $1536$, respectively.
  • Super-Resolution Flow-Matching: An SRFM model enhances low-resolution coarse audio tokens to high-resolution fine-grained audio outputs by learning optimal transformation paths between distributions. Unlike iterative methods, SRFM employs flow-matching techniques to directly model the mapping from coarse audio tokens derived from low-sampling-rate waveforms to fine-grained high-resolution latent audio features extracted from higher-sampling-rate audio (i.e., $48$kHz) via a $150$Hz Hifi-Codec model (a generic flow-matching training step is sketched after this list).

    For the $150$Hz Hifi-Codec model, given a single-channel audio sequence $X$ of duration $D$ as input, the model comprises an encoder network $E$ that transforms the raw audio into hidden features $H$, a group residual quantization layer $Q$ with a codebook size of $4$ and a codebook dimension of $C$, and a decoder $G$ that reconstructs the audio signal from the compressed latent features, where in this paper $H = 1024$ and $C = 1024$.

  • Model Variants: The paper details several variants of InspireMusic, including InspireMusic-0.5B, InspireMusic-1.5B, and InspireMusic-1.5B-Long, each tailored for different performance levels and composition lengths.
  • Training Procedure: The training process involves multiple stages, including training audio tokenizers, the autoregressive transformer model, and the flow-matching model. The autoregressive transformer model undergoes pre-training on large-scale audio-text paired datasets, followed by fine-tuning on curated datasets with human-labeled text captions. The SRFM model trains using paired low- and high-resolution audio tokens to learn the upscaling transformation.
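
As a quick sanity check on the tokenizer figures quoted in the Audio Tokenization item, the single-codebook rate can be reconstructed from the token rate and bandwidth. The implied codebook size below is an inference from those numbers, not a value reported in this summary.

```python
# WavTokenizer figures quoted above: 24 kHz audio, 75 Hz token rate, 0.9 kbps.
token_rate_hz = 75                                   # tokens per second of audio
bitrate_bps = 900                                    # 0.9 kbps, single codebook
bits_per_token = bitrate_bps / token_rate_hz         # -> 12 bits per token
implied_codebook_size = 2 ** int(bits_per_token)     # -> 4096 entries (inferred)
samples_per_token = 24_000 // token_rate_hz          # -> 320 samples per token
print(bits_per_token, implied_codebook_size, samples_per_token)
```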
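
The conditioning sequence used by the autoregressive transformer follows directly from the definition above. The sketch below is schematic; the token ids are placeholders, not the model's real vocabulary.

```python
from typing import List

def build_sequence(text_tokens: List[int], ts_token: int, te_token: int,
                   structure_token: int, label_token: int,
                   audio_tokens: List[int]) -> List[int]:
    """Concatenate S = {s_t^1..s_t^m, s_ts, s_te, s_s, s_l, s_a^1..s_a^n}."""
    m, n = len(text_tokens), len(audio_tokens)
    seq = text_tokens + [ts_token, te_token, structure_token, label_token] + audio_tokens
    assert len(seq) == m + n + 4   # total sequence length T = m + n + 4
    return seq

# Training then uses a standard next-token objective: cross-entropy of
# seq[1:] predicted from seq[:-1].
```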
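
For the SRFM component, one common way to train a flow-matching model is the rectified-flow (optimal-transport) parameterization sketched below, where the model regresses the constant velocity between a noise sample and the high-resolution codec latent, conditioned on the coarse tokens. This is a generic sketch under those assumptions; the paper's exact parameterization may differ.

```python
import torch
import torch.nn.functional as F

def srfm_training_step(model, coarse_cond, hires_latents):
    """coarse_cond: embeddings of the low-rate tokens;
    hires_latents: 48 kHz codec latents shaped (batch, channels, frames)."""
    x1 = hires_latents                          # sample from the target distribution
    x0 = torch.randn_like(x1)                   # noise sample
    t = torch.rand(x1.shape[0], 1, 1, device=x1.device)
    xt = (1.0 - t) * x0 + t * x1                # straight-line interpolation path
    target_velocity = x1 - x0                   # constant velocity along that path
    pred_velocity = model(xt, t.squeeze(), coarse_cond)
    return F.mse_loss(pred_velocity, target_velocity)
```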

The models were evaluated using both objective and subjective metrics. Objective metrics included Fréchet Distance (FD), KL divergence, and the CLAP (Contrastive Language-Audio Pre-training) score. Subjective evaluations were based on the Comparative Mean Opinion Score (CMOS) from professional music raters, considering audio-text alignment, audio quality, musicality, and overall performance.
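
The CLAP score in particular is typically computed as the cosine similarity between text and audio embeddings from a pretrained CLAP model. The sketch below uses placeholder method names rather than a specific library API.

```python
import torch
import torch.nn.functional as F

def clap_score(clap_model, text_prompt: str, audio_waveform: torch.Tensor) -> float:
    """Cosine similarity between prompt and generated-audio embeddings."""
    text_emb = clap_model.encode_text([text_prompt])       # (1, d), placeholder API
    audio_emb = clap_model.encode_audio(audio_waveform)    # (1, d), placeholder API
    return F.cosine_similarity(text_emb, audio_emb, dim=-1).item()
```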

The paper includes results from text-to-music and music continuation tasks, demonstrating that the InspireMusic-1.5B-Long model outperforms MusicGen and Stable Audio 2.0 across several evaluation dimensions. For example, in subjective evaluations on the text-to-music task, InspireMusic-1.5B-Long achieves a CMOS score 7% higher than Stable Audio 2.0 and 14% higher than InspireMusic-0.5B. InspireMusic-1.5B-Long also surpasses InspireMusic-0.5B by 6.5% in CMOS score on the same task.

Ablation studies were conducted to assess the contribution of each component, revealing that removing the SRFM model results in a notable drop in audio fidelity. Evaluations also explored the impact of different Classifier-Free Guidance (CFG) values and audio generation lengths on model performance.
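
For context, classifier-free guidance for an autoregressive token model is usually applied at decoding time by interpolating between conditional and unconditional logits with a guidance scale. The snippet below is a generic illustration, not the paper's implementation.

```python
import torch

def cfg_logits(cond_logits: torch.Tensor, uncond_logits: torch.Tensor,
               guidance_scale: float = 3.0) -> torch.Tensor:
    """guidance_scale = 1.0 recovers the purely conditional distribution."""
    return uncond_logits + guidance_scale * (cond_logits - uncond_logits)
```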

Authors (15)
  1. Chong Zhang
  2. Yukun Ma
  3. Qian Chen
  4. Wen Wang
  5. Shengkui Zhao
  6. Zexu Pan
  7. Hao Wang
  8. Chongjia Ni
  9. Trung Hieu Nguyen
  10. Kun Zhou
  11. Yidi Jiang
  12. Chaohong Tan
  13. Zhifu Gao
  14. Zhihao Du
  15. Bin Ma