Noise Robust TTS for Low Resource Speakers using Pre-trained Model and Speech Enhancement (2005.12531v2)

Published 26 May 2020 in eess.AS, cs.CL, cs.LG, and cs.SD

Abstract: With the popularity of deep neural networks, speech synthesis has achieved significant improvements in recent years based on the end-to-end encoder-decoder framework, and applications relying on speech synthesis technology are increasingly common in daily life. A robust speech synthesis model depends on high-quality, customized data, which requires substantial collection effort. It is therefore worth investigating how to exploit low-quality, low-resource voice data, which can easily be obtained from the Internet, for synthesizing personalized voices. In this paper, the proposed end-to-end speech synthesis model uses both a speaker embedding and a noise representation as conditional inputs to model speaker and noise information, respectively. First, the speech synthesis model is pre-trained on both multi-speaker clean data and noise-augmented data; then the pre-trained model is adapted on noisy, low-resource new-speaker data; finally, by setting the clean-speech condition, the model can synthesize the new speaker's clean voice. Experimental results show that speech generated by the proposed approach achieves better subjective evaluation results than directly fine-tuning a pre-trained multi-speaker speech synthesis model on denoised new-speaker data.
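The key idea in the abstract is conditioning the synthesis model on both a speaker embedding and a noise representation, then fixing the noise condition to "clean" at inference time. A minimal sketch of one common conditioning scheme (broadcasting the embeddings across time and concatenating them to the encoder outputs) is shown below; the dimensions and the concatenation-based conditioning are illustrative assumptions, not the paper's exact architecture.

```python
import numpy as np

def condition_encoder_outputs(encoder_out, speaker_emb, noise_emb):
    """Broadcast speaker and noise embeddings over the time axis and
    concatenate them to each encoder frame (one typical conditioning scheme)."""
    num_frames = encoder_out.shape[0]
    spk = np.tile(speaker_emb, (num_frames, 1))   # (T, d_spk)
    noi = np.tile(noise_emb, (num_frames, 1))     # (T, d_noise)
    return np.concatenate([encoder_out, spk, noi], axis=1)

# Hypothetical dimensions for illustration only.
rng = np.random.default_rng(0)
encoder_out = rng.standard_normal((50, 256))  # 50 text frames, 256-dim encoder states
speaker_emb = rng.standard_normal(64)         # speaker identity from adaptation data
clean_cond = np.zeros(16)                     # "clean" noise condition at synthesis time

conditioned = condition_encoder_outputs(encoder_out, speaker_emb, clean_cond)
print(conditioned.shape)  # (50, 336)
```

At adaptation time the noise representation would instead be estimated from the noisy recordings, so the model can attribute noise to the condition rather than to the speaker embedding; setting it to the clean condition at synthesis then yields clean speech.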

Authors (8)
  1. Dongyang Dai (9 papers)
  2. Li Chen (590 papers)
  3. Yuping Wang (56 papers)
  4. Mu Wang (29 papers)
  5. Rui Xia (53 papers)
  6. Xuchen Song (20 papers)
  7. Zhiyong Wu (171 papers)
  8. Yuxuan Wang (239 papers)
Citations (7)