
A learned conditional prior for the VAE acoustic space of a TTS system (2106.10229v1)

Published 14 Jun 2021 in eess.AS, cs.LG, and cs.SD

Abstract: Many factors influence speech, yielding different renditions of a given sentence. Generative models, such as variational autoencoders (VAEs), capture this variability and allow multiple renditions of the same sentence via sampling. The degree of prosodic variability depends heavily on the prior that is used when sampling. In this paper, we propose a novel method to compute an informative prior for the VAE latent space of a neural text-to-speech (TTS) system. By doing so, we aim to sample with more prosodic variability, while gaining controllability over the structure of the latent space. By using as prior the posterior distribution of a secondary VAE, which we condition on a speaker vector, we can sample from the primary VAE while explicitly taking the conditioning into account, resulting in samples from a specific region of the latent space for each condition (i.e. speaker). A formal preference test demonstrates a significant preference for the proposed approach over a standard conditional VAE. We also provide visualisations of the latent space, where well-separated condition-specific clusters appear, as well as ablation studies to better understand the behaviour of the system.
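The central mechanism in the abstract, replacing the primary VAE's fixed N(0, I) prior with a learned, speaker-conditional one, can be sketched in a few lines. The sketch below collapses the paper's secondary VAE into a single conditional prior network p(z | s) for brevity; all module names, dimensions, and the omission of the reconstruction term are illustrative assumptions, not the paper's exact architecture.

```python
# Minimal sketch of a learned speaker-conditional prior for a VAE latent
# space. The paper realises the prior via the posterior of a secondary VAE;
# here it is collapsed into one conditional prior network p(z | s). Every
# name, dimension and shape below is an assumption for illustration.
import torch
import torch.nn as nn

class GaussianEncoder(nn.Module):
    """Maps an input to the mean and log-variance of a diagonal Gaussian."""
    def __init__(self, in_dim: int, latent_dim: int, hidden: int = 256):
        super().__init__()
        self.net = nn.Sequential(nn.Linear(in_dim, hidden), nn.Tanh())
        self.mu = nn.Linear(hidden, latent_dim)
        self.logvar = nn.Linear(hidden, latent_dim)

    def forward(self, x):
        h = self.net(x)
        return self.mu(h), self.logvar(h)

def kl_diag_gaussians(mu_q, logvar_q, mu_p, logvar_p):
    """KL( N(mu_q, var_q) || N(mu_p, var_p) ) for diagonal Gaussians."""
    return 0.5 * (
        logvar_p - logvar_q
        + (logvar_q.exp() + (mu_q - mu_p) ** 2) / logvar_p.exp()
        - 1.0
    ).sum(dim=-1)

latent_dim, acoustic_dim, speaker_dim = 32, 80, 64
posterior_net = GaussianEncoder(acoustic_dim, latent_dim)  # q(z | x), primary VAE
prior_net = GaussianEncoder(speaker_dim, latent_dim)       # learned prior p(z | s)

x = torch.randn(8, acoustic_dim)  # acoustic features (assumed shape)
s = torch.randn(8, speaker_dim)   # speaker vectors (assumed shape)

# Primary VAE posterior and reparameterised sample.
mu_q, logvar_q = posterior_net(x)
z = mu_q + torch.randn_like(mu_q) * (0.5 * logvar_q).exp()

# KL term of the primary VAE, computed against the learned speaker-conditional
# prior instead of the usual fixed N(0, I).
mu_p, logvar_p = prior_net(s)
kl = kl_diag_gaussians(mu_q, logvar_q, mu_p, logvar_p).mean()
print(kl.item())  # one term of the ELBO; the reconstruction loss is omitted
```

At synthesis time, one would instead sample z from N(mu_p, var_p) for the desired speaker vector and decode it; this is what places samples in a speaker-specific region of the latent space, consistent with the well-separated clusters the paper's visualisations show.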

Authors (8)
  1. Penny Karanasou (11 papers)
  2. Sri Karlapati (13 papers)
  3. Alexis Moinet (22 papers)
  4. Arnaud Joly (14 papers)
  5. Ammar Abbas (12 papers)
  6. Simon Slangen (3 papers)
  7. Jaime Lorenzo-Trueba (1 paper)
  8. Thomas Drugman (61 papers)
Citations (7)
