
GANSpeech: Adversarial Training for High-Fidelity Multi-Speaker Speech Synthesis (2106.15153v1)

Published 29 Jun 2021 in eess.AS, cs.CL, cs.LG, and cs.SD

Abstract: Recent advances in neural multi-speaker text-to-speech (TTS) models have enabled the generation of reasonably good speech quality with a single model and made it possible to synthesize speech for a speaker with limited training data. Fine-tuning the multi-speaker model on target-speaker data can achieve better quality; however, a gap remains compared to real speech samples, and the resulting model is speaker-dependent. In this work, we propose GANSpeech, a high-fidelity multi-speaker TTS model that applies adversarial training to a non-autoregressive multi-speaker TTS model. In addition, we propose a simple but effective automatic scaling method for the feature matching loss used in adversarial training. In subjective listening tests, GANSpeech significantly outperformed the baseline multi-speaker FastSpeech and FastSpeech2 models and achieved a better MOS score than speaker-specific fine-tuned FastSpeech2.
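The abstract's key technical idea is automatically scaling the feature matching loss so it need not be tuned per setup. A minimal PyTorch sketch is below; it is an illustrative assumption of how such scaling can work (matching the feature-matching term's magnitude to a reference reconstruction loss via a detached ratio), not the paper's exact formulation:

```python
import torch
import torch.nn.functional as F

def scaled_feature_matching_loss(real_feats, fake_feats, recon_loss):
    """Feature matching loss with automatic scaling (illustrative sketch).

    real_feats / fake_feats: lists of intermediate discriminator feature maps
    for real and generated mel-spectrograms; recon_loss: a scalar
    reconstruction loss (e.g. L1 on mel-spectrograms) used as the reference
    magnitude. Names and the exact scaling rule are assumptions for
    illustration.
    """
    # Average L1 distance between discriminator features of real and fake.
    fm = sum(
        F.l1_loss(fake, real.detach())
        for real, fake in zip(real_feats, fake_feats)
    ) / len(real_feats)

    # Automatic scaling: detach the ratio so gradients still flow only
    # through the feature-matching term, while its magnitude tracks the
    # reconstruction loss.
    scale = (recon_loss / (fm + 1e-8)).detach()
    return scale * fm
```

Because the scale is detached, the scaled term equals the reconstruction loss in value at every step, so neither term dominates the total generator loss without hand-tuned weights.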

Authors (5)
  1. Jinhyeok Yang (8 papers)
  2. Jae-Sung Bae (11 papers)
  3. Taejun Bak (4 papers)
  4. Youngik Kim (2 papers)
  5. Hoon-Young Cho (16 papers)
Citations (32)
