Improving Language Model-Based Zero-Shot Text-to-Speech Synthesis with Multi-Scale Acoustic Prompts (2309.11977v3)
Abstract: Zero-shot text-to-speech (TTS) synthesis aims to clone any unseen speaker's voice without adaptation parameters. By quantizing the speech waveform into discrete acoustic tokens and modeling these tokens with a language model, recent language model-based TTS models demonstrate zero-shot speaker adaptation with only a 3-second acoustic prompt from an unseen speaker. However, they are limited by the length of the acoustic prompt, which makes it difficult to clone the personal speaking style. In this paper, we propose a novel zero-shot TTS model with multi-scale acoustic prompts, based on the neural codec language model VALL-E. A speaker-aware text encoder is proposed to learn the personal speaking style at the phoneme level from a style prompt consisting of multiple sentences. A VALL-E based acoustic decoder is then used to model the timbre at the frame level from the timbre prompt and generate speech. Experimental results show that our proposed method outperforms the baselines in terms of naturalness and speaker similarity, and achieves better performance when scaled to a longer style prompt.
- Shun Lei
- Yixuan Zhou
- Liyang Chen
- Dan Luo
- Zhiyong Wu
- Xixin Wu
- Shiyin Kang
- Tao Jiang
- Yahui Zhou
- Yuxing Han
- Helen Meng
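
To make the two-scale prompting described in the abstract concrete, the sketch below shows one plausible way to wire a speaker-aware text encoder (phoneme features attending over a long, multi-sentence style prompt of acoustic tokens) to a VALL-E-style autoregressive decoder that is conditioned on a short frame-level timbre prompt. All module names, layer sizes, and the exact fusion scheme are assumptions for illustration, not the authors' implementation.

```python
# Minimal PyTorch sketch of multi-scale acoustic prompting (illustrative assumptions only).
import torch
import torch.nn as nn


class SpeakerAwareTextEncoder(nn.Module):
    """Fuses phoneme embeddings with style information from a long acoustic style prompt."""

    def __init__(self, num_phonemes=100, num_codes=1024, d_model=256, n_heads=4):
        super().__init__()
        self.phoneme_emb = nn.Embedding(num_phonemes, d_model)
        self.style_emb = nn.Embedding(num_codes, d_model)
        self.text_encoder = nn.TransformerEncoder(
            nn.TransformerEncoderLayer(d_model, n_heads, batch_first=True), num_layers=2
        )
        # Cross-attention lets each phoneme query the multi-sentence style prompt.
        self.cross_attn = nn.MultiheadAttention(d_model, n_heads, batch_first=True)

    def forward(self, phonemes, style_prompt_codes):
        # phonemes: (B, T_text) phoneme ids; style_prompt_codes: (B, T_style) acoustic token ids
        text = self.text_encoder(self.phoneme_emb(phonemes))
        style = self.style_emb(style_prompt_codes)
        fused, _ = self.cross_attn(query=text, key=style, value=style)
        return text + fused  # phoneme-level features carrying speaking style


class ARAcousticDecoder(nn.Module):
    """VALL-E-like autoregressive decoder over (first-codebook) acoustic tokens."""

    def __init__(self, num_codes=1024, d_model=256, n_heads=4):
        super().__init__()
        self.code_emb = nn.Embedding(num_codes, d_model)
        self.decoder = nn.TransformerDecoder(
            nn.TransformerDecoderLayer(d_model, n_heads, batch_first=True), num_layers=2
        )
        self.proj = nn.Linear(d_model, num_codes)

    def forward(self, text_feats, timbre_prompt_codes, target_codes):
        # The short frame-level timbre prompt is prepended to the target token sequence.
        codes = torch.cat([timbre_prompt_codes, target_codes], dim=1)
        x = self.code_emb(codes)
        causal_mask = torch.triu(
            torch.ones(x.size(1), x.size(1), dtype=torch.bool), diagonal=1
        )
        h = self.decoder(tgt=x, memory=text_feats, tgt_mask=causal_mask)
        return self.proj(h)  # logits over the next acoustic token


if __name__ == "__main__":
    B = 2
    enc, dec = SpeakerAwareTextEncoder(), ARAcousticDecoder()
    phonemes = torch.randint(0, 100, (B, 12))
    style_prompt = torch.randint(0, 1024, (B, 300))   # long, multi-sentence style prompt
    timbre_prompt = torch.randint(0, 1024, (B, 75))   # ~3 s timbre prompt
    target = torch.randint(0, 1024, (B, 50))
    logits = dec(enc(phonemes, style_prompt), timbre_prompt, target)
    print(logits.shape)  # (B, 75 + 50, 1024)
```

The key design point the abstract emphasizes is that the two prompts operate at different scales: the style prompt can be arbitrarily long because it is only attended to at the phoneme level inside the text encoder, while the timbre prompt stays short because it is consumed frame by frame in the autoregressive decoder.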