Speech Token Prediction via Compressed-to-fine Language Modeling for Speech Generation (2505.24496v1)

Published 30 May 2025 in eess.AS

Abstract: Neural audio codecs, used as speech tokenizers, have demonstrated remarkable potential in the field of speech generation. However, to ensure high-fidelity audio reconstruction, neural audio codecs typically encode audio into long sequences of speech tokens, posing a significant challenge for downstream language models in long-context modeling. We observe that speech token sequences exhibit short-range dependency: due to the monotonic alignment between text and speech in text-to-speech (TTS) tasks, the prediction of the current token primarily relies on its local context, while long-range tokens contribute less to the current token prediction and often contain redundant information. Inspired by this observation, we propose a compressed-to-fine language modeling approach to address the challenge of long speech token sequences within neural codec language models: (1) Fine-grained Initial and Short-range Information: our approach retains the prompt and local tokens during prediction to ensure text alignment and the integrity of paralinguistic information; (2) Compressed Long-range Context: our approach compresses long-range token spans into compact representations to reduce redundant information while preserving essential semantics. Extensive experiments on various neural audio codecs and downstream language models validate the effectiveness and generalizability of the proposed approach, highlighting the importance of token compression in improving speech generation within neural codec language models. The demo of audio samples will be available at https://anonymous.4open.science/r/SpeechTokenPredictionViaCompressedToFinedLM.
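As a rough illustration of the idea described in the abstract, the sketch below constructs a compressed-to-fine context from a sequence of token embeddings: prompt and local tokens are kept fine-grained, while the long-range middle is compressed span by span. This is a minimal sketch, not the paper's implementation; the compression operator (mean-pooling here), the window and span sizes, and all names (`build_context`, `local_window`, `span_size`) are illustrative assumptions.

```python
# Minimal sketch of compressed-to-fine context construction.
# Assumption: mean-pooling stands in for the paper's span-compression
# operator, which the abstract does not specify.
import numpy as np

def build_context(token_embs: np.ndarray,
                  prompt_len: int,
                  local_window: int = 16,
                  span_size: int = 8) -> np.ndarray:
    """Keep prompt and local tokens fine-grained; mean-pool each
    long-range span in the middle into a single compact vector."""
    total = token_embs.shape[0]
    local_start = max(prompt_len, total - local_window)

    prompt = token_embs[:prompt_len]             # fine-grained prompt tokens
    middle = token_embs[prompt_len:local_start]  # long-range context to compress
    local = token_embs[local_start:]             # fine-grained local tokens

    pooled = [middle[i:i + span_size].mean(axis=0)
              for i in range(0, middle.shape[0], span_size)]
    compressed = np.stack(pooled) if pooled else middle  # empty if no middle

    return np.concatenate([prompt, compressed, local], axis=0)

# Example: 120 codec-token embeddings of dimension 64, first 20 are prompt.
embs = np.random.randn(120, 64).astype(np.float32)
ctx = build_context(embs, prompt_len=20)
print(ctx.shape)  # (47, 64): 20 prompt + 11 pooled spans + 16 local tokens
```

With these toy settings the 120-token context shrinks to 47 positions while the prompt and the most recent tokens stay untouched, matching the short-range-dependency observation that motivates the method.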

Authors (14)
  1. Wenrui Liu (11 papers)
  2. Qian Chen (264 papers)
  3. Wen Wang (144 papers)
  4. Yafeng Chen (26 papers)
  5. Jin Xu (131 papers)
  6. Zhifang Guo (14 papers)
  7. Guanrou Yang (12 papers)
  8. Weiqin Li (7 papers)
  9. Xiaoda Yang (14 papers)
  10. Tao Jin (53 papers)
  11. Minghui Fang (17 papers)
  12. Jialong Zuo (22 papers)
  13. Bai Jionghao (3 papers)
  14. Zemin Liu (28 papers)
