
CosyVoice 2: Scalable Streaming Speech Synthesis with Large Language Models (2412.10117v3)

Published 13 Dec 2024 in cs.SD, cs.AI, cs.LG, and eess.AS

Abstract: In our previous work, we introduced CosyVoice, a multilingual speech synthesis model based on supervised discrete speech tokens. By employing progressive semantic decoding with two popular generative models, language models (LMs) and Flow Matching, CosyVoice demonstrated high prosody naturalness, content consistency, and speaker similarity in speech in-context learning. Recently, significant progress has been made in multi-modal LLMs, where the response latency and real-time factor of speech synthesis play a crucial role in the interactive experience. Therefore, in this report, we present an improved streaming speech synthesis model, CosyVoice 2, which incorporates comprehensive and systematic optimizations. Specifically, we introduce finite-scalar quantization to improve the codebook utilization of speech tokens. For the text-speech LM, we streamline the model architecture to allow direct use of a pre-trained LLM as the backbone. In addition, we develop a chunk-aware causal flow matching model to support various synthesis scenarios, enabling both streaming and non-streaming synthesis within a single model. By training on a large-scale multilingual dataset, CosyVoice 2 achieves human-parity naturalness, minimal response latency, and virtually lossless synthesis quality in the streaming mode. We invite readers to listen to the demos at https://funaudioLLM.github.io/cosyvoice2.

Authors (19)
  1. Zhihao Du
  2. Yuxuan Wang
  3. Qian Chen
  4. Xian Shi
  5. Xiang Lv
  6. Tianyu Zhao
  7. Zhifu Gao
  8. Yexin Yang
  9. Changfeng Gao
  10. Hui Wang
  11. Fan Yu
  12. Huadai Liu
  13. Zhengyan Sheng
  14. Yue Gu
  15. Chong Deng
  16. Wen Wang
  17. Shiliang Zhang
  18. Zhijie Yan
  19. Jingren Zhou
Citations (1)

Summary

An Overview of CosyVoice 2: Scalable Streaming Speech Synthesis with LLMs

The paper "CosyVoice 2: Scalable Streaming Speech Synthesis with LLMs" presents an evolved version of a zero-shot text-to-speech (TTS) synthesis model, building upon the foundational work of CosyVoice. With increasing interest and advancements in multi-modal LLMs, this paper explores enhancements that address real-time interaction demands through effective streaming synthesis.

Innovations in Architecture and Techniques

CosyVoice 2 introduces several architectural and methodological changes to improve the efficacy of TTS models. A primary enhancement is the use of finite scalar quantization (FSQ) for speech tokenization, which improves codebook utilization. By fully exploiting the codebook capacity, FSQ retains more semantic information in the discrete speech tokens, which is crucial for natural speech synthesis.
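To make the FSQ idea concrete, the following is a minimal illustrative sketch (not the paper's implementation): each latent dimension is bounded with `tanh` and rounded to one of a small, fixed number of levels, so every grid cell is a usable code and the effective codebook is the product of the per-dimension level counts. The function name and level choices here are hypothetical.

```python
import numpy as np

def fsq_quantize(z, levels):
    """Finite scalar quantization sketch: bound each latent dimension
    into a symmetric range, round to an integer grid, and map the
    per-dimension level indices to a single codebook index."""
    levels = np.asarray(levels)
    half = (levels - 1) / 2.0            # e.g. 8 levels -> values in [-3.5, 3.5]
    bounded = np.tanh(z) * half          # squash into the level range
    quantized = np.round(bounded)        # nearest integer level
    # combine per-dimension indices into one code (mixed-radix number)
    index = 0
    for q, l, h in zip(quantized, levels, half):
        index = index * int(l) + int(q + h)
    return quantized, index

# three dimensions with 8 levels each -> 512 possible codes, all reachable
codes, idx = fsq_quantize(np.array([0.3, -1.2, 2.0]), levels=[8, 8, 8])
```

Because quantization is a simple per-dimension rounding, every one of the 512 codes corresponds to a reachable grid point, which is the intuition behind FSQ's high codebook utilization compared to learned vector-quantization codebooks.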

The text-to-speech LM also undergoes significant restructuring. The authors simplify the architecture by removing the text encoder and speaker embeddings, allowing a pre-trained LLM to be used directly as the backbone. This change aims to improve the alignment between speech tokens and text while leveraging the pre-trained LLM's context-understanding capabilities.
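With the text encoder removed, text and speech can be modeled as a single autoregressive token sequence that a pre-trained LLM consumes directly. The sketch below illustrates this framing; the special-token ids and helper name are hypothetical, not the paper's actual tokenizer layout.

```python
# Hypothetical special-token ids; real values depend on the tokenizer.
SOS, TURN_OF_SPEECH, EOS = 0, 1, 2

def build_tts_sequence(text_ids, speech_ids):
    """Concatenate text tokens and speech tokens into one sequence so a
    pre-trained LLM can autoregressively predict speech tokens as a
    continuation of the text, with no separate text encoder or
    speaker embedding."""
    return [SOS] + list(text_ids) + [TURN_OF_SPEECH] + list(speech_ids) + [EOS]

# training example: text tokens followed by the target speech tokens
seq = build_tts_sequence([101, 102], [900, 901, 902])
```

At inference time, the model would be prompted with the text portion up to the turn token and then sample speech tokens until `EOS`, which is what lets a next-token-prediction LLM serve as the TTS backbone.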

CosyVoice 2 unifies the synthesis process for streaming and non-streaming scenarios, achieved through a hybrid text-speech LLM and a chunk-aware causal flow matching model. This enables seamless switching between modes with virtually lossless quality, accommodating the highly variable latency requirements of real-time applications.
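One way to picture "chunk-aware" causality is an attention mask in which each frame can see everything up to the end of its own chunk: small chunks give low-latency streaming, while an unbounded chunk recovers full non-streaming attention. The sketch below is an illustrative mask construction under that assumption, not the paper's exact masking scheme.

```python
import numpy as np

def chunk_causal_mask(seq_len, chunk_size):
    """Boolean attention mask: position i may attend to all positions
    before the end of its own chunk. chunk_size=None yields a full
    (non-streaming) mask, so one model can serve both modes."""
    if chunk_size is None:
        return np.ones((seq_len, seq_len), dtype=bool)
    idx = np.arange(seq_len)
    chunk_end = (idx // chunk_size + 1) * chunk_size  # first invisible position
    return idx[None, :] < chunk_end[:, None]

streaming_mask = chunk_causal_mask(6, chunk_size=2)   # 2-frame lookahead chunks
offline_mask = chunk_causal_mask(6, chunk_size=None)  # full attention
```

Training with a mix of chunk sizes (including the unbounded one) is what would let a single set of weights switch between streaming and non-streaming synthesis at inference time.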

Evaluation and Performance

The authors extensively evaluate the performance of CosyVoice 2 across several benchmarks. The model achieves strong content consistency (measured by word error rate, WER) and speaker similarity (SS) compared to both its predecessor and contemporary TTS models such as ChatTTS and GPT-SoVITS. Notably, it exhibits human-parity synthesis quality, with several metrics even surpassing those of natural human speech in controlled settings.
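For readers unfamiliar with the WER metric used for content consistency: synthesized speech is transcribed by an ASR system, and the transcript is compared against the input text via word-level edit distance. A minimal reference implementation of the metric itself (not the paper's evaluation pipeline) looks like this:

```python
def word_error_rate(ref, hyp):
    """WER = word-level Levenshtein distance / number of reference words.
    Counts substitutions, insertions, and deletions equally."""
    r, h = ref.split(), hyp.split()
    # dynamic-programming table for edit distance
    d = [[0] * (len(h) + 1) for _ in range(len(r) + 1)]
    for i in range(len(r) + 1):
        d[i][0] = i
    for j in range(len(h) + 1):
        d[0][j] = j
    for i in range(1, len(r) + 1):
        for j in range(1, len(h) + 1):
            sub = d[i - 1][j - 1] + (r[i - 1] != h[j - 1])
            d[i][j] = min(sub, d[i - 1][j] + 1, d[i][j - 1] + 1)
    return d[len(r)][len(h)] / len(r)
```

A lower WER on ASR transcripts of the synthesized audio indicates that the TTS model rendered the input text more faithfully.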

Moreover, the paper presents an evaluation of the model's capacity for instructed generation. Here, CosyVoice 2 demonstrates adaptability to various linguistic instructions, emotional expressions, and speaking styles, setting new standards in expressive TTS synthesis without sacrificing coherence or intelligibility.

Implications and Future Directions

CosyVoice 2 demonstrates the feasibility of leveraging state-of-the-art LLMs to produce high-fidelity, contextually consistent speech. The paper underscores the use of quantization strategies and LLM integration as effective techniques for enhancing TTS systems. The unified approach towards streaming and non-streaming synthesis within a single model architecture pioneers possibilities for more responsive, adaptive voice interaction systems in real-time environments such as voice-driven interfaces or virtual assistants.

Looking forward, the scalability and extensibility of CosyVoice 2's framework suggest potential future applications, including broader multilingual synthesis and more nuanced paralinguistic controls such as rhythm and intonation patterns. Current limitations, such as language coverage and acoustic-characteristic control, remain open research questions whose resolution could significantly advance the capabilities of TTS models.

In conclusion, CosyVoice 2 emerges as a notable advancement in the domain of speech synthesis, adeptly marrying the inherent capabilities of LLMs with intricate speech generation processes to meet contemporary demands for real-time, natural, and expressive speech.
