
Deep Voice 3: Scaling Text-to-Speech with Convolutional Sequence Learning (1710.07654v3)

Published 20 Oct 2017 in cs.SD, cs.AI, cs.CL, cs.LG, and eess.AS

Abstract: We present Deep Voice 3, a fully-convolutional attention-based neural text-to-speech (TTS) system. Deep Voice 3 matches state-of-the-art neural speech synthesis systems in naturalness while training ten times faster. We scale Deep Voice 3 to data set sizes unprecedented for TTS, training on more than eight hundred hours of audio from over two thousand speakers. In addition, we identify common error modes of attention-based speech synthesis networks, demonstrate how to mitigate them, and compare several different waveform synthesis methods. We also describe how to scale inference to ten million queries per day on one single-GPU server.

Citations (298)

Summary

  • The paper introduces a fully-convolutional character-to-spectrogram architecture that leverages attention to synthesize human-like speech.
  • It achieves an order-of-magnitude faster training speed by exploiting GPU parallelism through convolutional layers instead of traditional RNNs.
  • The study compares waveform synthesis methods, showing that while WaveNet offers superior naturalness, alternatives like WORLD provide practical speed advantages.

Deep Voice 3: Scaling Text-to-Speech with Convolutional Sequence Learning

Deep Voice 3 introduces a novel fully-convolutional attention-based architecture for neural Text-to-Speech (TTS) systems, offering an efficient way to generate human-like speech from text. The paper presents a significant advancement in TTS by delivering enhanced naturalness in synthesized speech while training substantially faster than existing models. It continues the line of Deep Voice systems, which have gradually moved TTS architectures from complex, multi-stage pipelines to streamlined neural approaches.

The Deep Voice 3 model combines convolutional sequence learning with an attention mechanism to map textual input to audio output. A key contribution is a fully-convolutional character-to-spectrogram architecture that permits parallelized computation, yielding training that is an order of magnitude faster than recurrent predecessors. The speedup stems from convolutional layers, which exploit GPU parallelism far more effectively than sequential RNN structures, as sketched below.
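
As a rough illustration (a minimal sketch in PyTorch, not the authors' code), the block below shows the kind of causal, gated convolution with a residual connection that such an architecture stacks in place of recurrent cells; the channel count and kernel size are illustrative assumptions.

```python
# Minimal sketch of a causal gated-convolution block (assumed PyTorch;
# hyperparameters are illustrative, not the paper's exact configuration).
import math
import torch
import torch.nn as nn
import torch.nn.functional as F

class GatedConvBlock(nn.Module):
    """Causal 1-D convolution with a gated linear unit and a residual path."""

    def __init__(self, channels: int, kernel_size: int = 5):
        super().__init__()
        self.pad = kernel_size - 1  # left-pad only, so the conv stays causal
        # Double the channels: one half drives the output, the other the gate.
        self.conv = nn.Conv1d(channels, 2 * channels, kernel_size)

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        # x: (batch, channels, time). All timesteps are computed in one pass,
        # which is where the GPU parallelism advantage over RNNs comes from.
        h = self.conv(F.pad(x, (self.pad, 0)))
        out, gate = h.chunk(2, dim=1)
        out = out * torch.sigmoid(gate)        # gated linear unit
        return (x + out) * math.sqrt(0.5)      # scaled residual connection
```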

For training, Deep Voice 3 demonstrates scalability by processing the LibriSpeech dataset, which comprises 820 hours of audio from 2,484 speakers. Training at this scale lets the model learn from a diverse set of voices, improving adaptability and robustness across varied speech scenarios. Moreover, the paper identifies common failure modes of attention-based speech synthesis and proposes mitigations, including enforcing monotonic attention at inference to guard against errors such as skipped or repeated words; a sketch of this idea follows.
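
A minimal sketch of such an inference-time monotonicity constraint is shown below; masking the attention logits to a small forward window is one common realization, and the window size here is an assumption for illustration rather than the paper's exact mechanism.

```python
# Hedged sketch of forcing monotonic attention at inference by masking the
# attention logits to a small window ahead of the last attended encoder
# position (window size is an illustrative assumption).
import numpy as np

def monotonic_attention_step(logits: np.ndarray, prev_pos: int,
                             window: int = 3):
    """One decoder step: return masked softmax weights and the new position.

    logits   : (num_encoder_steps,) raw attention scores for this step.
    prev_pos : encoder position attended at the previous decoder step.
    """
    masked = np.full_like(logits, -np.inf)
    lo, hi = prev_pos, min(prev_pos + window, len(logits))
    masked[lo:hi] = logits[lo:hi]            # allow only a small forward window
    weights = np.exp(masked - masked.max())  # numerically stable softmax
    weights /= weights.sum()
    # Track the most-attended position so the window keeps moving forward,
    # preventing the repeats and skips the paper identifies.
    return weights, int(weights.argmax())
```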

The research further explores different waveform synthesis methods for converting the generated spectrograms into audio, comparing WORLD, Griffin-Lim, and WaveNet. Although WaveNet yields the highest Mean Opinion Score (MOS), reflecting the naturalness of the synthesized speech, the paper highlights the practicality of alternatives like WORLD, which offers substantial inference-speed advantages. In addition, the model can serve up to ten million queries per day on a single-GPU server, demonstrating its potential for real-world deployment in high-demand environments.
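
For the Griffin-Lim path, a minimal sketch using librosa is shown below; the iteration count and STFT parameters are illustrative assumptions, not the paper's configuration.

```python
# Minimal Griffin-Lim vocoding sketch using librosa (parameters are
# illustrative assumptions, not the paper's exact setup).
import numpy as np
import librosa

def spectrogram_to_audio(magnitude: np.ndarray, n_iter: int = 60,
                         hop_length: int = 256) -> np.ndarray:
    """Invert a linear-scale magnitude spectrogram to a waveform.

    magnitude: array of shape (1 + n_fft // 2, frames), e.g. as predicted
    by a TTS model's spectrogram output.
    """
    # Griffin-Lim iteratively estimates the phase the model never predicts,
    # trading some naturalness for a fast, training-free vocoder.
    return librosa.griffinlim(magnitude, n_iter=n_iter, hop_length=hop_length)
```

WaveNet replaces this fixed inversion with a learned autoregressive model of the waveform, which is what buys its higher MOS at the cost of slower inference.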

Implications of this research span both theoretical and practical domains. Theoretically, it pushes the boundaries of what is achievable with fully-convolutional architectures in sequence-to-sequence learning, particularly in TTS. This contributes to the ongoing discussion regarding optimal neural network topologies for handling large-scale parallel computations.

Practically, Deep Voice 3's scalability and efficiency position it as a viable candidate for deployment in applications that require high-quality TTS, from voice assistants to accessibility tools. Looking forward, there are compelling avenues to explore, such as jointly training the model with a neural vocoder for potentially improved quality, and leveraging larger, cleaner datasets to extend the model's versatility across accents and languages. These efforts could further refine neural TTS systems, inching closer to naturalness indistinguishable from human speech.
