Training Text-To-Speech Systems From Synthetic Data: A Practical Approach For Accent Transfer Tasks (2208.13183v1)

Published 28 Aug 2022 in cs.SD and eess.AS

Abstract: Transfer tasks in text-to-speech (TTS) synthesis - where one or more aspects of the speech of one set of speakers are transferred to another set of speakers whose speech does not feature these aspects originally - remain challenging. One of the challenges is that models with high-quality transfer capabilities can suffer from stability issues, making them impractical for user-facing critical tasks. This paper demonstrates that transfer can be obtained by training a robust TTS system on data generated by a less robust TTS system designed for a high-quality transfer task; in particular, a CHiVE-BERT monolingual TTS system is trained on the output of a Tacotron model designed for accent transfer. While some quality loss is inevitable with this approach, experimental results show that models trained on synthetic data this way can produce high-quality audio displaying accent transfer, while preserving speaker characteristics such as speaking style.
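The pipeline the abstract describes can be sketched at a high level as a teacher-student data flow: a less robust "teacher" model (the accent-transfer Tacotron) synthesizes audio, and that synthetic corpus is used to train the robust "student" system (CHiVE-BERT). The sketch below is purely illustrative; all class and method names are assumptions, not the authors' code, and the models are stand-ins with placeholder outputs.

```python
# Illustrative sketch (not the paper's implementation) of training a robust
# TTS "student" on synthetic data produced by an accent-transfer "teacher".
from dataclasses import dataclass, field


@dataclass
class Utterance:
    text: str
    audio: list              # placeholder for a waveform
    speaker_id: str
    accent: str


class AccentTransferTeacher:
    """Stand-in for the Tacotron-based accent-transfer model."""

    def synthesize(self, text: str, speaker_id: str, target_accent: str) -> Utterance:
        # A real model would return a waveform; here we emit a placeholder
        # so the data flow can be shown end to end.
        audio = [0.0] * 16000
        return Utterance(text, audio, speaker_id, target_accent)


@dataclass
class RobustStudentTTS:
    """Stand-in for the robust monolingual system (e.g. CHiVE-BERT)."""

    training_set: list = field(default_factory=list)

    def add_example(self, utt: Utterance) -> None:
        self.training_set.append(utt)

    def train(self) -> int:
        # Real training would fit model parameters on the synthetic corpus;
        # here we just report how many synthetic examples were collected.
        return len(self.training_set)


def build_and_train(teacher, student, scripts, speaker_id, target_accent):
    """Generate a synthetic accent-transferred corpus, then train on it."""
    for text in scripts:
        student.add_example(teacher.synthesize(text, speaker_id, target_accent))
    return student.train()
```

In this framing, stability comes from the student: even if the teacher is occasionally unreliable, its failures can be filtered out of the synthetic corpus before the robust student ever sees them.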

Authors (12)
  1. Lev Finkelstein (1 paper)
  2. Heiga Zen (36 papers)
  3. Norman Casagrande (8 papers)
  4. Chun-an Chan (4 papers)
  5. Ye Jia (33 papers)
  6. Tom Kenter (9 papers)
  7. Alexey Petelin (1 paper)
  8. Jonathan Shen (13 papers)
  9. Vincent Wan (2 papers)
  10. Yu Zhang (1400 papers)
  11. Yonghui Wu (115 papers)
  12. Rob Clark (10 papers)
Citations (8)