Cross-lingual Knowledge Distillation via Flow-based Voice Conversion for Robust Polyglot Text-To-Speech (2309.08255v1)

Published 15 Sep 2023 in eess.AS, cs.CL, cs.LG, and cs.SD

Abstract: In this work, we introduce a framework for cross-lingual speech synthesis, which involves an upstream Voice Conversion (VC) model and a downstream Text-To-Speech (TTS) model. The proposed framework consists of 4 stages. In the first two stages, we use a VC model to convert utterances in the target locale to the voice of the target speaker. In the third stage, the converted data is combined with the linguistic features and durations from recordings in the target language, which are then used to train a single-speaker acoustic model. Finally, the last stage entails the training of a locale-independent vocoder. Our evaluations show that the proposed paradigm outperforms state-of-the-art approaches which are based on training a large multilingual TTS model. In addition, our experiments demonstrate the robustness of our approach with different model architectures, languages, speakers and amounts of data. Moreover, our solution is especially beneficial in low-resource settings.
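The four-stage pipeline described in the abstract can be sketched in code. This is a minimal illustrative sketch only: every class and function name below (`Utterance`, `convert_voice`, `train_acoustic_model`, `train_vocoder`) is a hypothetical placeholder, not the authors' actual implementation or API.

```python
from dataclasses import dataclass

@dataclass
class Utterance:
    text: str
    speaker: str
    locale: str

def convert_voice(utterance: Utterance, target_speaker: str) -> Utterance:
    # Stages 1-2: an upstream VC model (flow-based in the paper) re-voices
    # target-locale recordings into the target speaker's voice.
    return Utterance(utterance.text, target_speaker, utterance.locale)

def train_acoustic_model(converted, linguistic_features, durations):
    # Stage 3: the converted audio, combined with linguistic features and
    # durations from target-language recordings, trains a single-speaker
    # acoustic model. Returned dict stands in for trained model state.
    return {"speaker": converted[0].speaker, "n_utterances": len(converted)}

def train_vocoder(acoustic_model):
    # Stage 4: a locale-independent vocoder maps acoustic features to audio.
    return {"vocoder_for": acoustic_model["speaker"]}

# Hypothetical example: Spanish recordings converted to an English speaker.
corpus = [
    Utterance("hola", "es_speaker_1", "es-ES"),
    Utterance("adios", "es_speaker_2", "es-ES"),
]
converted = [convert_voice(u, "target_en_speaker") for u in corpus]
acoustic_model = train_acoustic_model(converted, linguistic_features=None,
                                      durations=None)
vocoder = train_vocoder(acoustic_model)
```

The key design point the abstract highlights is the decoupling: the upstream VC model handles cross-lingual speaker identity, so the downstream TTS model can be trained as a plain single-speaker system rather than a large multilingual one.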

Authors (8)
  1. Dariusz Piotrowski (1 paper)
  2. Renard Korzeniowski (3 papers)
  3. Alessio Falai (3 papers)
  4. Sebastian Cygert (18 papers)
  5. Kamil Pokora (8 papers)
  6. Georgi Tinchev (10 papers)
  7. Ziyao Zhang (16 papers)
  8. Kayoko Yanagisawa (8 papers)
Citations (1)
