
Pretraining Techniques for Sequence-to-Sequence Voice Conversion (2008.03088v1)

Published 7 Aug 2020 in eess.AS, cs.CL, and cs.SD

Abstract: Sequence-to-sequence (seq2seq) voice conversion (VC) models are attractive owing to their ability to convert prosody. Nonetheless, without sufficient data, seq2seq VC models can suffer from unstable training and mispronunciation in the converted speech, and are thus far from practical. To tackle these shortcomings, we propose to transfer knowledge from other speech processing tasks where large-scale corpora are easily available, namely text-to-speech (TTS) and automatic speech recognition (ASR). We argue that VC models initialized with such pretrained ASR or TTS model parameters can generate effective hidden representations for high-fidelity, highly intelligible converted speech. We apply these techniques to recurrent neural network (RNN)-based and Transformer-based models, and through systematic experiments we demonstrate the effectiveness of the pretraining scheme and the superiority of Transformer-based models over RNN-based models in terms of intelligibility, naturalness, and similarity.
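The core idea is parameter transfer: components of the seq2seq VC model are initialized from models pretrained on large ASR or TTS corpora, and the whole model is then fine-tuned on a small parallel VC corpus. The sketch below (PyTorch) illustrates one plausible version of this initialization pattern only; it is not the authors' implementation. The module names (`SpeechEncoder`, `MelDecoder`, `VCModel`), hyperparameters, checkpoint paths, and the specific mapping of pretrained checkpoints to encoder/decoder are hypothetical placeholders, since the abstract considers both ASR- and TTS-based pretraining.

```python
# Minimal sketch (PyTorch) of the parameter-transfer idea, not the authors'
# implementation. Module names, hyperparameters, and checkpoint paths are
# hypothetical placeholders.
import torch
import torch.nn as nn


class SpeechEncoder(nn.Module):
    """Transformer encoder over source-speaker mel-spectrogram frames."""

    def __init__(self, n_mels=80, d_model=256, n_layers=4, n_heads=4):
        super().__init__()
        self.prenet = nn.Linear(n_mels, d_model)
        layer = nn.TransformerEncoderLayer(d_model, n_heads, batch_first=True)
        self.encoder = nn.TransformerEncoder(layer, n_layers)

    def forward(self, mel):                       # mel: (B, T_src, n_mels)
        return self.encoder(self.prenet(mel))     # (B, T_src, d_model)


class MelDecoder(nn.Module):
    """Autoregressive Transformer decoder emitting target-speaker mel frames."""

    def __init__(self, n_mels=80, d_model=256, n_layers=4, n_heads=4):
        super().__init__()
        self.prenet = nn.Linear(n_mels, d_model)
        layer = nn.TransformerDecoderLayer(d_model, n_heads, batch_first=True)
        self.decoder = nn.TransformerDecoder(layer, n_layers)
        self.proj = nn.Linear(d_model, n_mels)

    def forward(self, prev_mel, memory):          # teacher forcing
        x = self.prenet(prev_mel)
        T = x.size(1)
        # standard causal mask: -inf above the diagonal, 0 elsewhere
        causal = torch.triu(
            torch.full((T, T), float("-inf"), device=x.device), diagonal=1
        )
        return self.proj(self.decoder(x, memory, tgt_mask=causal))


class VCModel(nn.Module):
    """Seq2seq VC: source mel -> hidden representation -> target mel."""

    def __init__(self):
        super().__init__()
        self.encoder = SpeechEncoder()
        self.decoder = MelDecoder()

    def forward(self, src_mel, tgt_mel_shifted):
        memory = self.encoder(src_mel)
        return self.decoder(tgt_mel_shifted, memory)


def init_from_pretrained(vc_model,
                         enc_ckpt="pretrained_encoder.pt",   # hypothetical path
                         dec_ckpt="pretrained_decoder.pt"):  # hypothetical path
    """Copy matching parameters from pretrained checkpoints before fine-tuning."""
    enc_state = torch.load(enc_ckpt, map_location="cpu")
    dec_state = torch.load(dec_ckpt, map_location="cpu")
    # strict=False leaves VC-specific layers (e.g. prenets) randomly initialized
    vc_model.encoder.load_state_dict(enc_state, strict=False)
    vc_model.decoder.load_state_dict(dec_state, strict=False)
    return vc_model


if __name__ == "__main__":
    model = VCModel()                    # call init_from_pretrained(model)
                                         # once checkpoints are available
    src = torch.randn(2, 120, 80)        # source-speaker mel frames
    tgt_in = torch.randn(2, 100, 80)     # shifted target mel (teacher forcing)
    print(model(src, tgt_in).shape)      # torch.Size([2, 100, 80])
```

After initialization, fine-tuning would proceed on the small parallel VC corpus with a frame-level regression loss on the predicted mel-spectrograms; the assumption here is only that pretrained parameters are copied into matching submodules rather than training from scratch.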

Authors (5)
  1. Wen-Chin Huang (53 papers)
  2. Tomoki Hayashi (42 papers)
  3. Yi-Chiao Wu (42 papers)
  4. Hirokazu Kameoka (42 papers)
  5. Tomoki Toda (106 papers)
Citations (37)
