Adapter-Based Extension of Multi-Speaker Text-to-Speech Model for New Speakers (2211.00585v1)

Published 1 Nov 2022 in eess.AS, cs.LG, and cs.SD

Abstract: Fine-tuning is a popular method for adapting text-to-speech (TTS) models to new speakers, but it has some drawbacks. Fine-tuning usually requires several hours of high-quality speech per speaker, and it risks degrading synthesis quality for previously learned speakers. In this paper we propose an alternative approach to TTS adaptation based on parameter-efficient adapter modules. In the proposed approach, a few small adapter modules are added to the original network; the original weights are frozen, and only the adapters are fine-tuned on speech from the new speaker. This parameter-efficient fine-tuning produces a new model with a high degree of parameter sharing with the original model. Our experiments on the LibriTTS, HiFi-TTS, and VCTK datasets validate the effectiveness of the adapter-based method through objective and subjective metrics.

Authors (3)
  1. Cheng-Ping Hsieh (9 papers)
  2. Subhankar Ghosh (41 papers)
  3. Boris Ginsburg (111 papers)
Citations (17)
