VALL-T: Decoder-Only Generative Transducer for Robust and Decoding-Controllable Text-to-Speech (2401.14321v4)

Published 25 Jan 2024 in eess.AS and cs.SD

Abstract: Recent TTS models with decoder-only Transformer architecture, such as SPEAR-TTS and VALL-E, achieve impressive naturalness and demonstrate the ability for zero-shot adaptation given a speech prompt. However, such decoder-only TTS models lack monotonic alignment constraints, sometimes leading to hallucination issues such as mispronunciation, word skipping and repeating. To address this limitation, we propose VALL-T, a generative Transducer model that introduces shifting relative position embeddings for input phoneme sequence, explicitly indicating the monotonic generation process while maintaining the architecture of decoder-only Transformer. Consequently, VALL-T retains the capability of prompt-based zero-shot adaptation and demonstrates better robustness against hallucinations with a relative reduction of 28.3% in the word error rate. Furthermore, the controllability of alignment in VALL-T during decoding facilitates the use of untranscribed speech prompts, even in unknown languages. It also enables the synthesis of lengthy speech by utilizing an aligned context window.

Authors (9)
  1. Chenpeng Du (28 papers)
  2. Yiwei Guo (29 papers)
  3. Hankun Wang (11 papers)
  4. Yifan Yang (578 papers)
  5. Zhikang Niu (11 papers)
  6. Shuai Wang (466 papers)
  7. Hui Zhang (405 papers)
  8. Xie Chen (165 papers)
  9. Kai Yu (201 papers)
Citations (19)