
Boosting Diffusion Model for Spectrogram Up-sampling in Text-to-speech: An Empirical Study (2406.04633v1)

Published 7 Jun 2024 in eess.AS

Abstract: Scaling text-to-speech (TTS) with autoregressive language models (LMs) to large-scale datasets by quantizing waveforms into discrete speech tokens has made great progress in capturing the diversity and expressiveness of human speech, but the quality of speech reconstructed from discrete tokens remains far from satisfactory, depending on the token compression ratio. Generative diffusion models trained with a score-matching loss and continuous normalizing flows trained with a flow-matching loss have become prominent in the generation of images as well as speech. LM-based TTS systems usually quantize speech into discrete tokens, generate these tokens autoregressively, and finally use a diffusion model to up-sample the coarse-grained speech tokens into fine-grained codec features or mel-spectrograms before reconstructing waveforms with a vocoder; this pipeline has high latency and is unrealistic for real-time speech applications. In this paper, we systematically investigate varied diffusion models for the up-sampling stage, which is the main bottleneck for streaming synthesis in LM- and diffusion-based architectures, and we present the model architecture along with objective and subjective metrics showing quality and efficiency improvements.
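The up-sampling stage the abstract refers to can be illustrated with a minimal sketch: a diffusion model that denoises mel-spectrogram frames conditioned on coarse speech-token embeddings, trained with the standard DDPM-style noise-prediction (score-matching) objective. All names, shapes, and the noise schedule below (MelUpsampler, diffusion_loss, a linear beta schedule) are assumptions for illustration, not the paper's actual implementation.

```python
import torch
import torch.nn as nn

class MelUpsampler(nn.Module):
    """Predicts the noise added to a mel-spectrogram, conditioned on
    coarse speech-token embeddings and the diffusion step t."""
    def __init__(self, mel_dim=80, token_dim=256, hidden=512, steps=1000):
        super().__init__()
        self.cond_proj = nn.Linear(token_dim, hidden)
        self.mel_proj = nn.Linear(mel_dim, hidden)
        self.time_emb = nn.Embedding(steps, hidden)
        self.backbone = nn.TransformerEncoder(
            nn.TransformerEncoderLayer(hidden, nhead=8, batch_first=True),
            num_layers=4,
        )
        self.out = nn.Linear(hidden, mel_dim)

    def forward(self, noisy_mel, token_emb, t):
        # noisy_mel: (B, T, mel_dim); token_emb: (B, T, token_dim); t: (B,)
        h = self.mel_proj(noisy_mel) + self.cond_proj(token_emb)
        h = h + self.time_emb(t)[:, None, :]
        return self.out(self.backbone(h))

def diffusion_loss(model, mel, token_emb, alphas_cumprod):
    """One training step: corrupt the mel at a random diffusion step,
    then regress the model's output onto the injected noise."""
    b = mel.size(0)
    t = torch.randint(0, alphas_cumprod.size(0), (b,))
    a = alphas_cumprod[t].view(b, 1, 1)
    noise = torch.randn_like(mel)
    noisy_mel = a.sqrt() * mel + (1 - a).sqrt() * noise
    return nn.functional.mse_loss(model(noisy_mel, token_emb, t), noise)

# Toy usage with random tensors standing in for LM outputs and ground truth.
model = MelUpsampler()
betas = torch.linspace(1e-4, 0.02, 1000)          # assumed linear schedule
alphas_cumprod = torch.cumprod(1.0 - betas, dim=0)
mel = torch.randn(2, 120, 80)                     # ground-truth mel frames
token_emb = torch.randn(2, 120, 256)              # coarse speech-token embeddings
loss = diffusion_loss(model, mel, token_emb, alphas_cumprod)
loss.backward()
```

In a full system, token_emb would come from the autoregressive LM and inference would run an iterative denoising loop over all diffusion steps; that iteration is the latency bottleneck for streaming synthesis that the paper targets.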

Authors (4)
  1. Chong Zhang (137 papers)
  2. Yanqing Liu (48 papers)
  3. Yang Zheng (124 papers)
  4. Sheng Zhao (75 papers)
Citations (1)
