
Make-A-Voice: Unified Voice Synthesis With Discrete Representation (2305.19269v1)

Published 30 May 2023 in eess.AS, cs.AI, cs.CL, and cs.SD

Abstract: Various applications of voice synthesis have been developed independently despite the fact that they generate "voice" as output in common. In addition, the majority of voice synthesis models currently rely on annotated audio data, but it is crucial to scale them to self-supervised datasets in order to effectively capture the wide range of acoustic variations present in human voice, including speaker identity, emotion, and prosody. In this work, we propose Make-A-Voice, a unified framework for synthesizing and manipulating voice signals from discrete representations. Make-A-Voice leverages a "coarse-to-fine" approach to model the human voice, which involves three stages: 1) semantic stage: model high-level transformation between linguistic content and self-supervised semantic tokens, 2) acoustic stage: introduce varying control signals as acoustic conditions for semantic-to-acoustic modeling, and 3) generation stage: synthesize high-fidelity waveforms from acoustic tokens. Make-A-Voice offers notable benefits as a unified voice synthesis framework: 1) Data scalability: the major backbone (i.e., acoustic and generation stage) does not require any annotations, and thus the training data could be scaled up. 2) Controllability and conditioning flexibility: we investigate different conditioning mechanisms and effectively handle three voice synthesis applications, including text-to-speech (TTS), voice conversion (VC), and singing voice synthesis (SVS) by re-synthesizing the discrete voice representations with prompt guidance. Experimental results demonstrate that Make-A-Voice exhibits superior audio quality and style similarity compared with competitive baseline models. Audio samples are available at https://Make-A-Voice.github.io

Citations (25)

Summary

  • The paper introduces a novel unified framework, Make-A-Voice, that integrates TTS, VC, and SVS via a coarse-to-fine approach using discrete representations.
  • It employs semantic-to-acoustic token conversion without annotated data, enabling scalable training and fine control over varied acoustic properties.
  • Experimental results demonstrate superior audio quality and enhanced expressiveness, outperforming competitive baselines in voice synthesis tasks.

The paper "Make-A-Voice: Unified Voice Synthesis With Discrete Representation" introduces a novel framework for voice synthesis, termed Make-A-Voice. This framework is particularly noteworthy for its ability to integrate various applications of voice synthesis, such as text-to-speech (TTS), voice conversion (VC), and singing voice synthesis (SVS), into a single unified system. The core strategy of Make-A-Voice involves synthesizing and manipulating voice signals from discrete representations, allowing for a versatile and scalable approach to voice synthesis.

Key Features and Methodology

  1. Coarse-to-Fine Approach (see the stage-2 sketch after this list):
    • Semantic Stage: This initial stage models the high-level transformation from linguistic content to self-supervised semantic tokens, capturing what is said while abstracting away acoustic detail.
    • Acoustic Stage: The system introduces varying control signals as acoustic conditions for converting semantic tokens into acoustic tokens. This conditioning is what allows the synthesis to be steered by properties such as speaker identity or emotional expression.
    • Generation Stage: In the final stage, the framework synthesizes high-fidelity waveforms from the acoustic tokens, producing the audio output.
  2. Data Scalability:
    • A significant advantage of Make-A-Voice is that its major components—specifically the acoustic and generation stages—do not require annotated data. This means the system can be trained on much larger datasets, potentially capturing a wider variety of human vocal characteristics.
  3. Controllability and Flexibility:
    • The framework allows for varied conditioning mechanisms that enable fine control over the voice synthesis process. This is essential for applications like TTS, VC, and SVS, where different aspects of the voice must be tailored according to specific requirements.
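As a rough illustration of the semantic-to-acoustic stage, the toy PyTorch module below consumes semantic tokens plus an acoustic prompt and predicts acoustic tokens. The vocabulary sizes, architecture, and greedy decoding are assumptions chosen for brevity, not the paper's configuration (which builds on self-supervised speech units and neural-codec tokens).

```python
import torch
import torch.nn as nn

# Hypothetical vocabulary sizes and model width; the paper's actual
# tokenizers and architecture differ.
N_SEMANTIC, N_ACOUSTIC, DIM = 1000, 1024, 256

class SemanticToAcoustic(nn.Module):
    """Toy stage-2 model: semantic tokens + acoustic prompt -> acoustic tokens."""
    def __init__(self):
        super().__init__()
        self.sem_emb = nn.Embedding(N_SEMANTIC, DIM)
        self.prompt_emb = nn.Embedding(N_ACOUSTIC, DIM)
        layer = nn.TransformerEncoderLayer(DIM, nhead=4, batch_first=True)
        self.encoder = nn.TransformerEncoder(layer, num_layers=2)
        self.head = nn.Linear(DIM, N_ACOUSTIC)

    def forward(self, semantic_tokens, prompt_tokens):
        # Prepend the prompt so attention can read speaker/style from it.
        x = torch.cat([self.prompt_emb(prompt_tokens),
                       self.sem_emb(semantic_tokens)], dim=1)
        h = self.encoder(x)
        logits = self.head(h[:, prompt_tokens.size(1):])  # drop prompt span
        return logits.argmax(dim=-1)  # greedy decoding, for illustration

# Toy usage: batch of 1, 8 semantic tokens, 4 prompt tokens.
model = SemanticToAcoustic()
semantic = torch.randint(0, N_SEMANTIC, (1, 8))
prompt = torch.randint(0, N_ACOUSTIC, (1, 4))
acoustic = model(semantic, prompt)  # a codec decoder (stage 3) would
print(acoustic.shape)               # then render these as a waveform
```

Because both the semantic and acoustic tokens can be extracted from raw audio without labels, this stage can be trained purely on unannotated speech, which is the source of the data-scalability claim above.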

Experimental Validation

The reported experiments show that Make-A-Voice achieves superior audio quality and style similarity compared with competitive baseline models. This suggests that its discrete voice representations capture the essential characteristics of the human voice, yielding improvements in both fidelity and expressiveness.

Applications and Implications

Make-A-Voice's framework, with its unified approach and scalability, represents a significant step forward in voice synthesis technology. By freeing its major backbone from annotation requirements and offering extensive controllability, it paves the way for more advanced synthetic voice generation that better mimics the diversity of human speech.

This approach holds promise for enhancing applications across a range of domains where synthetic voice is required, broadening the scope of how such technologies can be applied in realistic settings.
