Generating diverse and natural text-to-speech samples using a quantized fine-grained VAE and auto-regressive prosody prior (2002.03788v1)

Published 6 Feb 2020 in eess.AS, cs.LG, cs.SD, and stat.ML

Abstract: Recent neural text-to-speech (TTS) models with fine-grained latent features enable precise control of the prosody of synthesized speech. Such models typically incorporate a fine-grained variational autoencoder (VAE) structure, extracting latent features at each input token (e.g., phonemes). However, generating samples with the standard VAE prior often results in unnatural and discontinuous speech, with dramatic prosodic variation between tokens. This paper proposes a sequential prior in a discrete latent space which can generate more natural-sounding samples. This is accomplished by discretizing the latent features using vector quantization (VQ), and separately training an autoregressive (AR) prior model over the result. We evaluate the approach using listening tests, objective metrics of automatic speech recognition (ASR) performance, and measurements of prosody attributes. Experimental results show that the proposed model significantly improves the naturalness of randomly generated samples. Furthermore, initial experiments demonstrate that random sampling from the proposed model can be used as data augmentation to improve ASR performance.
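
The core idea described in the abstract — quantizing per-token prosody latents against a VQ codebook and then sampling a sequence of discrete codes from an autoregressive prior — can be illustrated with a toy sketch. The snippet below is a minimal illustration and not the authors' implementation: the codebook, latents, and transition matrix are random stand-ins, and the paper's AR prior is a neural sequence model rather than the hypothetical first-order transition table used here.

```python
import numpy as np

rng = np.random.default_rng(0)

num_codes, latent_dim, num_tokens = 64, 8, 20
codebook = rng.normal(size=(num_codes, latent_dim))   # stand-in for a learned VQ codebook
latents = rng.normal(size=(num_tokens, latent_dim))   # stand-in for fine-grained per-phoneme latents

# Vector quantization: replace each latent with the index of its nearest codebook entry.
dists = ((latents[:, None, :] - codebook[None, :, :]) ** 2).sum(-1)
codes = dists.argmin(axis=1)                          # one discrete prosody code per input token

# Toy autoregressive prior: sample the next code conditioned on the previous one
# via a hypothetical first-order transition table (the paper trains an AR neural model).
transition = rng.dirichlet(np.ones(num_codes), size=num_codes)
sampled = [int(rng.integers(num_codes))]
for _ in range(num_tokens - 1):
    sampled.append(int(rng.choice(num_codes, p=transition[sampled[-1]])))

print("quantized codes:", codes)
print("codes sampled from the AR prior:", sampled)
```

Sampling codes sequentially in this way is what gives the prior its smoothness: each token's prosody code depends on its predecessor instead of being drawn independently, which is the source of the discontinuity problem with the standard VAE prior.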

Authors (8)
  1. Guangzhi Sun (51 papers)
  2. Yu Zhang (1400 papers)
  3. Ron J. Weiss (30 papers)
  4. Yuan Cao (201 papers)
  5. Heiga Zen (36 papers)
  6. Andrew Rosenberg (32 papers)
  7. Bhuvana Ramabhadran (47 papers)
  8. Yonghui Wu (115 papers)
Citations (89)
