
Semi-Supervised Training for Improving Data Efficiency in End-to-End Speech Synthesis (1808.10128v1)

Published 30 Aug 2018 in cs.CL, cs.LG, cs.SD, and eess.AS

Abstract: Although end-to-end text-to-speech (TTS) models such as Tacotron have shown excellent results, they typically require a sizable set of high-quality <text, audio> pairs for training, which are expensive to collect. In this paper, we propose a semi-supervised training framework to improve the data efficiency of Tacotron. The idea is to allow Tacotron to utilize textual and acoustic knowledge contained in large, publicly-available text and speech corpora. Importantly, these external data are unpaired and potentially noisy. Specifically, first we embed each word in the input text into word vectors and condition the Tacotron encoder on them. We then use an unpaired speech corpus to pre-train the Tacotron decoder in the acoustic domain. Finally, we fine-tune the model using available paired data. We demonstrate that the proposed framework enables Tacotron to generate intelligible speech using less than half an hour of paired training data.
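
The abstract describes a three-stage recipe: condition the encoder on pre-trained word vectors, pre-train the decoder on unpaired speech, then fine-tune on the small paired corpus. The sketch below illustrates one way those stages could be wired up in PyTorch. The module sizes, the zeroed text context used during decoder pre-training, and the mean-pooled context during fine-tuning are assumptions for illustration only; the paper's actual Tacotron model uses attention to align text and acoustics, and its exact architecture differs.

```python
# Illustrative sketch of the three-stage semi-supervised recipe from the
# abstract. Shapes, hyperparameters, and the null/mean-pooled contexts are
# assumptions, not the paper's implementation.
import torch
import torch.nn as nn

class TextEncoder(nn.Module):
    """Stage 1: encoder conditioned on external pre-trained word vectors."""
    def __init__(self, word_vectors: torch.Tensor, hidden: int = 256):
        super().__init__()
        # Initialize from external word vectors and freeze them, so textual
        # knowledge is transferred rather than overwritten.
        self.embed = nn.Embedding.from_pretrained(word_vectors, freeze=True)
        self.rnn = nn.GRU(word_vectors.size(1), hidden, batch_first=True)

    def forward(self, token_ids):
        return self.rnn(self.embed(token_ids))[0]  # (B, T_text, hidden)

class SpecDecoder(nn.Module):
    """Autoregressive mel decoder; runs with or without a text context."""
    def __init__(self, n_mels: int = 80, hidden: int = 256):
        super().__init__()
        self.rnn = nn.GRU(n_mels + hidden, hidden, batch_first=True)
        self.proj = nn.Linear(hidden, n_mels)

    def forward(self, mel_inputs, context):
        # Teacher forcing: previous mel frame plus (possibly null) context.
        x = torch.cat([mel_inputs, context], dim=-1)
        return self.proj(self.rnn(x)[0])

def pretrain_decoder(decoder, mel_batches, hidden=256, steps=1000):
    """Stage 2: train the decoder as a frame-level predictor on unpaired
    speech, replacing the encoder context with zeros (an assumption here)."""
    opt = torch.optim.Adam(decoder.parameters(), lr=1e-3)
    loss_fn = nn.L1Loss()
    for _, mels in zip(range(steps), mel_batches):  # mels: (B, T, n_mels)
        null_ctx = torch.zeros(mels.size(0), mels.size(1) - 1, hidden)
        pred = decoder(mels[:, :-1], null_ctx)  # predict the next frame
        loss = loss_fn(pred, mels[:, 1:])
        opt.zero_grad()
        loss.backward()
        opt.step()

def finetune(encoder, decoder, paired_batches, steps=1000):
    """Stage 3: fine-tune on the small paired corpus. The text context is
    mean-pooled and broadcast over decoder steps for brevity; the real
    model would use attention instead."""
    params = [p for p in list(encoder.parameters()) +
              list(decoder.parameters()) if p.requires_grad]
    opt = torch.optim.Adam(params, lr=1e-4)
    loss_fn = nn.L1Loss()
    for _, (tokens, mels) in zip(range(steps), paired_batches):
        ctx = encoder(tokens).mean(dim=1, keepdim=True)   # (B, 1, hidden)
        ctx = ctx.expand(-1, mels.size(1) - 1, -1)
        pred = decoder(mels[:, :-1], ctx)
        loss = loss_fn(pred, mels[:, 1:])
        opt.zero_grad()
        loss.backward()
        opt.step()
```

Freezing the embeddings and pre-training the decoder with a null context mirror the abstract's claim that unpaired text and speech carry transferable knowledge; fine-tuning then only has to learn the text-to-audio alignment from the small paired set.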

Authors (5)
  1. Yu-An Chung (33 papers)
  2. Yuxuan Wang (239 papers)
  3. Wei-Ning Hsu (76 papers)
  4. Yu Zhang (1400 papers)
  5. RJ Skerry-Ryan (21 papers)
Citations (111)
