CosyVoice 2: Scalable Streaming Speech Synthesis with Large Language Models (2412.10117v3)
Abstract: In our previous work, we introduced CosyVoice, a multilingual speech synthesis model based on supervised discrete speech tokens. By employing progressive semantic decoding with two popular generative models, language models (LMs) and Flow Matching, CosyVoice demonstrated high prosody naturalness, content consistency, and speaker similarity in speech in-context learning. Recently, significant progress has been made in multi-modal LLMs, where the response latency and real-time factor of speech synthesis play a crucial role in the interactive experience. Therefore, in this report, we present an improved streaming speech synthesis model, CosyVoice 2, which incorporates comprehensive and systematic optimizations. Specifically, we introduce finite-scalar quantization to improve the codebook utilization of speech tokens. For the text-speech LM, we streamline the model architecture to allow direct use of a pre-trained LLM as the backbone. In addition, we develop a chunk-aware causal flow matching model to support various synthesis scenarios, enabling both streaming and non-streaming synthesis within a single model. By training on a large-scale multilingual dataset, CosyVoice 2 achieves human-parity naturalness, minimal response latency, and virtually lossless synthesis quality in the streaming mode. We invite readers to listen to the demos at https://funaudioLLM.github.io/cosyvoice2.
- Zhihao Du
- Yuxuan Wang
- Qian Chen
- Xian Shi
- Xiang Lv
- Tianyu Zhao
- Zhifu Gao
- Yexin Yang
- Changfeng Gao
- Hui Wang
- Fan Yu
- Huadai Liu
- Zhengyan Sheng
- Yue Gu
- Chong Deng
- Wen Wang
- Shiliang Zhang
- Zhijie Yan
- Jingren Zhou
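The abstract credits finite-scalar quantization (FSQ) with improving codebook utilization: instead of learning a lookup codebook, each latent dimension is bounded and rounded onto a small uniform grid, so every code in the product grid is reachable by construction. Below is a minimal NumPy sketch of this idea; the function name, the 4-dimensional latent, and the `[5, 5, 5, 5]` level configuration are illustrative assumptions, not CosyVoice 2's actual tokenizer settings.

```python
import numpy as np

def fsq_quantize(z, levels):
    """Minimal finite-scalar-quantization sketch (illustrative, not the
    paper's configuration): bound each latent dimension with tanh, round
    to one of `levels[i]` uniform values, and pack the per-dimension
    codes into a single integer token index."""
    z = np.asarray(z, dtype=np.float64)
    L = np.asarray(levels, dtype=np.float64)   # odd counts keep the grid symmetric
    half = (L - 1) / 2.0
    bounded = np.tanh(z) * half                # each dim squashed into (-half, half)
    codes = np.round(bounded)                  # straight-through rounding at train time
    offsets = codes + half                     # shift to the range 0..L-1
    bases = np.concatenate(([1.0], np.cumprod(L[:-1])))
    index = int(np.dot(offsets, bases))        # mixed-radix packing into one token id
    return codes / half, index                 # dequantized vector in [-1, 1], token id

# Usage: 4 dimensions with 5 levels each give 5**4 = 625 codes, all reachable,
# which is the utilization property the abstract attributes to FSQ.
vec, idx = fsq_quantize([0.3, -1.2, 2.0, 0.0], [5, 5, 5, 5])
```

Because the "codebook" is just this fixed grid, there are no dead codes to collapse onto, unlike a learned vector-quantization codebook.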