Fully-hierarchical fine-grained prosody modeling for interpretable speech synthesis (2002.03785v1)

Published 6 Feb 2020 in eess.AS, cs.LG, cs.SD, and stat.ML

Abstract: This paper proposes a hierarchical, fine-grained and interpretable latent variable model for prosody based on the Tacotron 2 text-to-speech model. It achieves multi-resolution modeling of prosody by conditioning finer level representations on coarser level ones. Additionally, it imposes hierarchical conditioning across all latent dimensions using a conditional variational auto-encoder (VAE) with an auto-regressive structure. Evaluation of reconstruction performance illustrates that the new structure does not degrade the model while allowing better interpretability. Interpretations of prosody attributes are provided together with the comparison between word-level and phone-level prosody representations. Moreover, both qualitative and quantitative evaluations are used to demonstrate the improvement in the disentanglement of the latent dimensions.
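The abstract describes coarse-to-fine prosody latents (e.g. word level conditioning phone level) sampled with a conditional VAE whose latent dimensions are generated auto-regressively. The sketch below illustrates that sampling scheme only in outline; the function names, the use of random weights in place of trained networks, and the feature shapes are all illustrative assumptions, not the paper's implementation.

```python
import numpy as np

rng = np.random.default_rng(0)

def reparameterize(mu, log_var):
    # Standard VAE reparameterization: z = mu + sigma * eps.
    eps = rng.standard_normal(mu.shape)
    return mu + np.exp(0.5 * log_var) * eps

def linear(x, out_dim):
    # Stand-in for a learned projection; random weights here are
    # placeholders for trained network parameters.
    w = rng.standard_normal((x.shape[-1], out_dim)) * 0.1
    return x @ w

def sample_autoregressive(cond, latent_dim):
    # Auto-regressive structure across latent dimensions: each
    # dimension is sampled conditioned on all previously sampled ones.
    z = []
    for _ in range(latent_dim):
        ctx = np.concatenate([cond, np.array(z)])
        mu = linear(ctx, 1)[0]
        log_var = linear(ctx, 1)[0]
        z.append(reparameterize(np.array([mu]), np.array([log_var]))[0])
    return np.array(z)

def hierarchical_latents(word_feats, phone_feats, latent_dim=3):
    """Coarse-to-fine sampling: word-level latents are drawn first,
    then each phone-level latent is conditioned on the latent of the
    word that contains it (multi-resolution conditioning)."""
    word_z = [sample_autoregressive(w, latent_dim) for w in word_feats]
    phone_z = []
    for w_z, phones in zip(word_z, phone_feats):
        for p in phones:
            cond = np.concatenate([p, w_z])   # finer level sees coarser latent
            phone_z.append(sample_autoregressive(cond, latent_dim))
    return np.array(word_z), np.array(phone_z)

# Toy example: 2 words, spanning 3 and 2 phones respectively.
word_feats = rng.standard_normal((2, 4))
phone_feats = [rng.standard_normal((3, 4)), rng.standard_normal((2, 4))]
wz, pz = hierarchical_latents(word_feats, phone_feats)
```

In the paper, the conditioning networks are trained jointly with Tacotron 2 and the latents are regularized toward a prior; the sketch only shows the direction of information flow (coarse word latents into fine phone latents, earlier latent dimensions into later ones).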

Authors (6)
  1. Guangzhi Sun (51 papers)
  2. Yu Zhang (1400 papers)
  3. Ron J. Weiss (30 papers)
  4. Yuan Cao (201 papers)
  5. Heiga Zen (36 papers)
  6. Yonghui Wu (115 papers)
Citations (126)
