
Investigation of F0 conditioning and Fully Convolutional Networks in Variational Autoencoder based Voice Conversion (1905.00615v2)

Published 2 May 2019 in eess.AS and cs.SD

Abstract: In this work, we investigate the effectiveness of two techniques for improving variational autoencoder (VAE) based voice conversion (VC). First, we reconsider the relationship between the vocoder features extracted by the high-quality vocoders adopted in conventional VC systems, and hypothesize that the spectral features are in fact F0 dependent. This hypothesis implies that during the conversion phase, the latent codes and the converted features in VAE based VC are in fact dependent on the source F0. To address this, we propose to utilize the F0 as an additional input to the decoder. The model can learn to disentangle the latent code from the F0 and thus generate converted features that depend on the converted F0. Second, to better capture the temporal dependencies of the spectral features and the F0 pattern, we replace the frame-wise conversion structure in the original VAE based VC framework with a fully convolutional network structure. Our experiments demonstrate that both the degree of disentanglement and the naturalness of the converted speech are indeed improved.
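The F0 conditioning idea in the abstract can be sketched with a toy numpy model. This is an illustrative reconstruction, not the authors' implementation: the dimensions, weights, and the `encode`/`decode` helpers are all hypothetical, and only the conditioning mechanism (concatenating the F0 and speaker code to the decoder input) follows the paper's description.

```python
import numpy as np

rng = np.random.default_rng(0)

# Illustrative dimensions (not from the paper):
# spectral features, latent code, speaker one-hot code
D_SPEC, D_LAT, D_SPK = 36, 16, 4

# Randomly initialized weights stand in for trained VAE parameters.
W_enc = rng.standard_normal((D_LAT, D_SPEC)) * 0.1
# Decoder input = latent code + speaker code + 1-dim F0 (the proposed conditioning).
W_dec = rng.standard_normal((D_SPEC, D_LAT + D_SPK + 1)) * 0.1

def encode(spec_frame):
    """Map a spectral frame to a latent code (ideally speaker- and F0-independent)."""
    return np.tanh(W_enc @ spec_frame)

def decode(z, speaker_onehot, log_f0):
    """Reconstruct spectral features conditioned on the speaker code AND the F0."""
    dec_in = np.concatenate([z, speaker_onehot, [log_f0]])
    return W_dec @ dec_in

# Conversion phase: encode a source frame, then decode with the TARGET
# speaker code and a converted F0 (e.g., linearly transformed in the
# log domain), so the output depends on the converted F0 rather than
# the source F0.
src_frame = rng.standard_normal(D_SPEC)
z = encode(src_frame)
target_spk = np.eye(D_SPK)[2]          # hypothetical target speaker
converted_log_f0 = 5.1                 # hypothetical converted log-F0 value
converted = decode(z, target_spk, converted_log_f0)
print(converted.shape)                 # (36,)
```

In the actual system the encoder and decoder are neural networks (fully convolutional in the paper's second proposal), but the data flow is the same: the latent code alone should carry only F0-independent content, while the decoder receives the F0 explicitly.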

Authors (9)
  1. Wen-Chin Huang (53 papers)
  2. Yi-Chiao Wu (42 papers)
  3. Chen-Chou Lo (7 papers)
  4. Patrick Lumban Tobing (20 papers)
  5. Tomoki Hayashi (42 papers)
  6. Kazuhiro Kobayashi (19 papers)
  7. Tomoki Toda (106 papers)
  8. Yu Tsao (200 papers)
  9. Hsin-Min Wang (97 papers)
Citations (13)
