
DiffSinger: Singing Voice Synthesis via Shallow Diffusion Mechanism (2105.02446v6)

Published 6 May 2021 in eess.AS, cs.LG, and cs.SD

Abstract: Singing voice synthesis (SVS) systems are built to synthesize high-quality and expressive singing voice, in which the acoustic model generates the acoustic features (e.g., mel-spectrogram) given a music score. Previous singing acoustic models adopt a simple loss (e.g., L1 and L2) or generative adversarial network (GAN) to reconstruct the acoustic features, while they suffer from over-smoothing and unstable training issues respectively, which hinder the naturalness of synthesized singing. In this work, we propose DiffSinger, an acoustic model for SVS based on the diffusion probabilistic model. DiffSinger is a parameterized Markov chain that iteratively converts the noise into mel-spectrogram conditioned on the music score. By implicitly optimizing variational bound, DiffSinger can be stably trained and generate realistic outputs. To further improve the voice quality and speed up inference, we introduce a shallow diffusion mechanism to make better use of the prior knowledge learned by the simple loss. Specifically, DiffSinger starts generation at a shallow step smaller than the total number of diffusion steps, according to the intersection of the diffusion trajectories of the ground-truth mel-spectrogram and the one predicted by a simple mel-spectrogram decoder. Besides, we propose boundary prediction methods to locate the intersection and determine the shallow step adaptively. The evaluations conducted on a Chinese singing dataset demonstrate that DiffSinger outperforms state-of-the-art SVS work. Extensional experiments also prove the generalization of our methods on text-to-speech task (DiffSpeech). Audio samples: https://diffsinger.github.io. Codes: https://github.com/MoonInTheRiver/DiffSinger. The old title of this work: "Diffsinger: Diffusion acoustic model for singing voice synthesis".

Authors (5)
  1. Jinglin Liu (38 papers)
  2. Chengxi Li (38 papers)
  3. Yi Ren (215 papers)
  4. Feiyang Chen (18 papers)
  5. Zhou Zhao (219 papers)
Citations (235)

Summary

DiffSinger: Advancements in Singing Voice Synthesis through Diffusion Models

The paper "DiffSinger: Singing Voice Synthesis via Shallow Diffusion Mechanism" introduces an advanced model for Singing Voice Synthesis (SVS) that utilizes the diffusion probabilistic model. DiffSinger is engineered to enhance the quality and realism of synthesized singing voices by addressing the limitations of traditional methods, specifically over-smoothing and instability issues. The authors implement a novel shallow diffusion mechanism that capitalizes on prior knowledge from simpler models, demonstrating significant improvements over existing SVS systems.

Overview of DiffSinger

DiffSinger is built on the diffusion probabilistic model, which has gained prominence for its stable training and realistic generation capabilities. The model is a parameterized Markov chain that gradually converts Gaussian noise into a mel-spectrogram conditioned on the music score. Training optimizes a variational bound on the data likelihood, enabling efficient and stable learning without adversarial feedback.
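As a concrete illustration, the closed-form forward (diffusion) process that such training relies on can be sketched as follows. This is a minimal NumPy sketch, not the authors' implementation; the linear beta schedule values, the number of steps T, and the mel-spectrogram shape are illustrative assumptions.

```python
import numpy as np

# Illustrative linear noise schedule (values are assumptions, not the paper's).
T = 100
betas = np.linspace(1e-4, 0.06, T)
alphas = 1.0 - betas
alpha_bars = np.cumprod(alphas)

def q_sample(x0, t, rng):
    """Diffuse a clean mel-spectrogram x0 directly to step t in closed form:
    x_t = sqrt(alpha_bar_t) * x0 + sqrt(1 - alpha_bar_t) * noise.
    Training would ask a score-conditioned denoiser to predict `noise`
    from (x_t, t), minimizing a simple MSE derived from the variational bound."""
    noise = rng.standard_normal(x0.shape)
    x_t = np.sqrt(alpha_bars[t]) * x0 + np.sqrt(1.0 - alpha_bars[t]) * noise
    return x_t, noise

rng = np.random.default_rng(0)
x0 = rng.standard_normal((80, 128))  # 80 mel bins x 128 frames (illustrative)
x_t, eps = q_sample(x0, t=50, rng=rng)
```

Because `alpha_bars` decays toward zero, larger `t` yields a noisier `x_t`; at `t = T - 1` the sample is close to pure Gaussian noise.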

A unique aspect of DiffSinger is the shallow diffusion mechanism. This mechanism begins the reverse process at an earlier stage (a shallow step) in the diffusion process rather than from Gaussian white noise, as traditional models do. The shallow start is determined by the intersection of diffusion paths between the ground truth and the predictions of an auxiliary mel-spectrogram decoder. This approach reduces the computational burden during inference and improves the naturalness of the synthesized voice.
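The shallow diffusion mechanism can be sketched in the same NumPy style: instead of starting the reverse process from pure noise at step T, sampling starts from the auxiliary decoder's mel-spectrogram diffused forward to a shallow step k, then denoises for only k steps. The denoiser below is a placeholder for the trained, music-score-conditioned network, and the value of k, the schedule, and the shapes are illustrative assumptions.

```python
import numpy as np

# Illustrative linear noise schedule (same assumptions as before).
T = 100
betas = np.linspace(1e-4, 0.06, T)
alphas = 1.0 - betas
alpha_bars = np.cumprod(alphas)

def reverse_step(x_t, t, eps_pred, rng):
    """One standard DDPM ancestral sampling step given a noise prediction."""
    coef = betas[t] / np.sqrt(1.0 - alpha_bars[t])
    mean = (x_t - coef * eps_pred) / np.sqrt(alphas[t])
    if t > 0:  # add sampling noise except at the final step
        mean += np.sqrt(betas[t]) * rng.standard_normal(x_t.shape)
    return mean

def shallow_sample(mel_aux, k, denoiser, rng):
    """Diffuse the auxiliary decoder's mel to shallow step k, then run the
    reverse process for k+1 steps instead of the full T."""
    noise = rng.standard_normal(mel_aux.shape)
    x = np.sqrt(alpha_bars[k]) * mel_aux + np.sqrt(1.0 - alpha_bars[k]) * noise
    for t in range(k, -1, -1):
        x = reverse_step(x, t, denoiser(x, t), rng)
    return x

rng = np.random.default_rng(0)
mel_aux = rng.standard_normal((80, 64))      # stand-in for the decoder output
dummy_denoiser = lambda x, t: np.zeros_like(x)  # placeholder for the network
out = shallow_sample(mel_aux, 54, dummy_denoiser, rng)
```

Starting at `k < T` is where the inference-time savings come from: the loop runs roughly `k/T` of the full chain.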

Main Contributions

This work presents several notable contributions:

  1. DiffSinger Model: Introducing a diffusion probabilistic model to SVS, addressing the over-smoothing and training-instability issues of previous acoustic models.
  2. Shallow Diffusion Mechanism: A methodological advancement that enhances voice quality and reduces inference time by leveraging prior knowledge from auxiliary models. It demonstrates a substantial 45.1% reduction in inference time.
  3. Boundary Prediction: The development of a boundary prediction network to adaptively locate the optimal intersection step, further refining the shallow diffusion process.
  4. Generalization to TTS: The extension of methodologies to text-to-speech (TTS), yielding notable performance gains over existing TTS models, affirming the approach's versatility.
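The boundary prediction idea can be approximated with a simple heuristic; note the paper instead learns this boundary with a trained predictor network, so everything below, the schedule, the threshold, and the noisy proxy for the over-smoothed auxiliary prediction, is an illustrative assumption. The heuristic picks the smallest step at which the gap between the two diffused trajectories' means becomes small relative to the injected noise, i.e., where the trajectories effectively intersect.

```python
import numpy as np

# Illustrative linear noise schedule (same assumptions as before).
T = 100
betas = np.linspace(1e-4, 0.06, T)
alpha_bars = np.cumprod(1.0 - betas)

def find_boundary(mel_gt, mel_aux, threshold=0.1):
    """Heuristic stand-in for the learned boundary predictor: return the
    smallest step t at which the RMS gap between the two diffused means,
    scaled by sqrt(alpha_bar_t), is small relative to the noise std."""
    gap = np.sqrt(np.mean((mel_gt - mel_aux) ** 2))
    for t in range(T):
        scaled_gap = np.sqrt(alpha_bars[t]) * gap / np.sqrt(1.0 - alpha_bars[t])
        if scaled_gap < threshold:
            return t
    return T - 1

rng = np.random.default_rng(0)
mel_gt = rng.standard_normal((80, 64))
mel_aux = mel_gt + 0.3 * rng.standard_normal((80, 64))  # proxy for a smoothed prediction
k = find_boundary(mel_gt, mel_aux)
```

The closer the auxiliary decoder's prediction is to the ground truth, the smaller the gap and the shallower the returned step, which is exactly why a good simple-loss decoder shortens inference.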

Experimental Validation

The authors validate their model on a Chinese singing dataset and perform additional experiments on a TTS task using the LJSpeech dataset. The evaluations indicate that DiffSinger not only surpasses state-of-the-art SVS systems but also generalizes well to TTS. Quantitatively, DiffSinger achieves a 0.11 Mean Opinion Score (MOS) improvement over a state-of-the-art SVS acoustic model, a clear gain in synthesis quality.

Theoretical and Practical Implications

Theoretical implications of this work highlight the efficacy of diffusion probabilistic models in conditional generation tasks beyond image synthesis, opening avenues for their application in audio domains. Practically, DiffSinger's capability to produce high-quality audio with reduced computational overhead is significant for deployment in music production and interactive media.

Speculation on Future Directions

Future work could investigate the integration of more fine-grained pitch and articulation controls to refine synthesized outputs further. Exploration into hybrid models combining elements of diffusion and other generative frameworks could also yield advancements in both fidelity and computational efficiency.

In conclusion, DiffSinger represents a robust and innovative advancement in SVS, showcasing how diffusion models can be tailored to address domain-specific challenges. Its impact on both the scientific understanding and practical applications of synthesis models promises to be considerable.
