
Phonetic Enhanced Language Modeling for Text-to-Speech Synthesis (2406.02009v2)

Published 4 Jun 2024 in eess.AS, cs.CL, and cs.SD

Abstract: Recent language-model-based text-to-speech (TTS) frameworks demonstrate scalability and in-context learning capabilities. However, they suffer from robustness issues due to the accumulation of errors in speech unit predictions during autoregressive language modeling. In this paper, we propose a phonetic enhanced language modeling method to improve the performance of TTS models. We leverage self-supervised representations that are phonetically rich as the training target for the autoregressive language model. Subsequently, a non-autoregressive model is employed to predict discrete acoustic codecs that contain fine-grained acoustic details. The TTS model focuses solely on linguistic modeling during autoregressive training, thereby reducing the error propagation that occurs in non-autoregressive training. Both objective and subjective evaluations validate the effectiveness of our proposed method.
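The abstract describes a two-stage pipeline: an autoregressive model first predicts phonetically rich discrete tokens from text, and a non-autoregressive model then predicts acoustic codec tokens in parallel. The sketch below illustrates only that control flow with stand-in functions; all names and the toy token arithmetic are hypothetical and do not reflect the paper's actual models.

```python
# Hypothetical sketch of the two-stage decoding described in the abstract.
# Stage 1 is sequential (token-by-token); stage 2 is parallel over frames.

def ar_phonetic_lm(text_tokens):
    """Stage 1 (autoregressive): map text tokens to a phonetically rich
    discrete token sequence. Each step conditions on the previous token,
    standing in for sampling from an autoregressive language model."""
    phonetic, prev = [], 0
    for t in text_tokens:
        tok = (prev * 31 + t) % 512  # toy stand-in for AR prediction
        phonetic.append(tok)
        prev = tok
    return phonetic

def nar_codec_model(phonetic_tokens):
    """Stage 2 (non-autoregressive): predict all acoustic codec tokens
    in one parallel pass from the phonetic tokens, so acoustic errors
    do not feed back into the sequence generation."""
    return [(p * 7 + 3) % 1024 for p in phonetic_tokens]

def synthesize(text_tokens):
    phonetic = ar_phonetic_lm(text_tokens)  # linguistic modeling only
    acoustic = nar_codec_model(phonetic)    # fine-grained acoustic detail
    return acoustic
```

The point of the split is that the error-prone autoregressive loop only ever handles linguistic/phonetic content; acoustic detail is filled in afterwards without feedback into the sequential model.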

Authors (10)
  1. Kun Zhou (217 papers)
  2. Shengkui Zhao (21 papers)
  3. Yukun Ma (33 papers)
  4. Chong Zhang (137 papers)
  5. Hao Wang (1119 papers)
  6. Dianwen Ng (21 papers)
  7. Chongjia Ni (18 papers)
  8. Nguyen Trung Hieu (3 papers)
  9. Jia Qi Yip (20 papers)
  10. Bin Ma (78 papers)
Citations (3)