
Language-Codec: Reducing the Gaps Between Discrete Codec Representation and Speech Language Models (2402.12208v3)

Published 19 Feb 2024 in eess.AS and cs.SD

Abstract: In recent years, LLMs have achieved significant success in generative tasks (e.g., speech cloning and audio generation) related to speech, audio, music, and other signal domains. A crucial element of these models is the discrete acoustic codec, which serves as an intermediate representation replacing the mel-spectrogram. However, several gaps exist between discrete codecs and downstream speech language models. Specifically, 1) most codec models are trained on only 1,000 hours of data, whereas most speech language models are trained on 60,000 hours; 2) achieving good reconstruction performance requires numerous codebooks, which increases the burden on downstream speech language models; 3) the initial channel of the codebooks contains excessive information, making it challenging to directly generate acoustic tokens from weakly supervised signals such as text in downstream tasks. Consequently, leveraging the characteristics of speech language models, we propose Language-Codec. In Language-Codec, we introduce a Mask Channel Residual Vector Quantization (MCRVQ) mechanism, along with improved Fourier transform structures and larger training datasets, to address the aforementioned gaps. We compare our method with competing audio compression algorithms and observe significant outperformance across extensive evaluations. Furthermore, we validate the efficiency of Language-Codec on downstream speech language models. The source code and pre-trained models can be accessed at https://github.com/jishengpeng/languagecodec .
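MCRVQ extends residual vector quantization (RVQ), the multi-codebook scheme used by most neural codecs, by masking what the first channels may encode; the masking details are in the paper. As background, a plain RVQ encoder can be sketched as follows. The codebook sizes, dimensions, and the zero-codeword convenience (which guarantees each stage can only reduce the residual) are illustrative assumptions, not the paper's configuration.

```python
import numpy as np

rng = np.random.default_rng(0)

def rvq_encode(x, codebooks):
    """Residual vector quantization: each codebook quantizes the
    residual left over by all previous stages."""
    residual = x
    codes = []
    quantized = np.zeros_like(x)
    for cb in codebooks:
        # pick the codeword nearest to the current residual
        idx = int(np.argmin(np.linalg.norm(cb - residual, axis=1)))
        codes.append(idx)
        quantized = quantized + cb[idx]
        residual = x - quantized
    return codes, quantized

# Toy setup: 4 codebooks of 8 codewords in a 16-dim latent space.
# Each codebook includes the zero vector, so a stage may pass the
# residual through unchanged and the error never increases.
dim, n_books, n_codes = 16, 4, 8
codebooks = [
    np.vstack([np.zeros((1, dim)), rng.normal(size=(n_codes - 1, dim))])
    for _ in range(n_books)
]
x = rng.normal(size=dim)

codes, x_hat = rvq_encode(x, codebooks)
# `codes` is one token per codebook; downstream speech language models
# must predict all of these channels, which is why reducing the number
# of codebooks (gap 2 above) matters.
```

The gap the paper targets is visible in this structure: the first codebook absorbs most of the signal's energy, so later channels carry progressively less information; MCRVQ's masking redistributes that load across channels.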

Authors (9)
  1. Shengpeng Ji
  2. Minghui Fang
  3. Ziyue Jiang
  4. Rongjie Huang
  5. Jialong Zuo
  6. Shulei Wang
  7. Zhou Zhao
  8. Siqi Zheng
  9. Qian Chen
Citations (10)
