
Scaling Speech-Text Pre-training with Synthetic Interleaved Data (2411.17607v2)

Published 26 Nov 2024 in cs.CL, cs.SD, and eess.AS

Abstract: Speech language models (SpeechLMs) accept speech input and produce speech output, allowing for more natural human-computer interaction compared to text-based LLMs. Traditional approaches for developing SpeechLMs are constrained by the limited availability of unsupervised speech data and parallel speech-text data, which are significantly less abundant than text pre-training data, thereby limiting their scalability as language models. We propose a novel approach to scaling speech-text pre-training by leveraging large-scale synthetic interleaved data derived from text corpora, eliminating the need for parallel speech-text datasets. Our method efficiently constructs speech-text interleaved data by sampling text spans from existing text corpora and synthesizing corresponding speech spans using a text-to-token model, bypassing the need to generate actual speech. We also employ a supervised speech tokenizer derived from an automatic speech recognition (ASR) model by incorporating a vector-quantized bottleneck into the encoder. This supervised training approach yields discrete speech tokens with strong semantic preservation even at lower frame rates (e.g., 12.5 Hz), while still maintaining speech reconstruction quality. Starting from a pre-trained language model and scaling our pre-training to 1 trillion tokens (with 600B synthetic interleaved speech-text tokens), we achieve state-of-the-art performance in speech language modeling and spoken question answering, improving accuracy on spoken question answering from the previous SOTA of 13% (Moshi) to 31%. We further demonstrate that by fine-tuning the pre-trained model with speech dialogue data, we can develop an end-to-end spoken chatbot that achieves performance comparable to existing baselines in both conversational abilities and speech quality, even when operating exclusively in the speech domain.
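The interleaving recipe described in the abstract — sample text spans from a corpus and replace some of them with discrete speech tokens produced by a text-to-token model — can be sketched as follows. This is a minimal illustration, not the paper's implementation: the `text_to_speech_tokens` function is a stand-in for the supervised speech tokenizer, and the span length, sampling probability, and `<speech>`/`<s...>` sentinel tokens are invented for the example.

```python
import random

def text_to_speech_tokens(span, vocab_size=16384):
    """Stand-in for a text-to-token model: map a text span to a list of
    discrete speech-token ids. Here we fabricate one pseudo token per
    character so the example is deterministic and self-contained."""
    return [ord(ch) % vocab_size for ch in span]

def make_interleaved_sequence(words, span_len=5, speech_prob=0.5, seed=0):
    """Split a word list into fixed-size spans and randomly render each
    span either as plain text or as synthetic speech tokens, producing
    one interleaved speech-text training sequence."""
    rng = random.Random(seed)
    sequence = []
    for i in range(0, len(words), span_len):
        span = " ".join(words[i:i + span_len])
        if rng.random() < speech_prob:
            # Speech span: discrete tokens wrapped in sentinel markers,
            # so the model sees modality boundaries explicitly.
            sequence.append("<speech>")
            sequence.extend(f"<s{t}>" for t in text_to_speech_tokens(span))
            sequence.append("</speech>")
        else:
            sequence.append(span)  # Text span kept as-is.
    return sequence

seq = make_interleaved_sequence(
    "the quick brown fox jumps over the lazy dog again and again too".split()
)
```

Because the speech spans are synthesized as tokens directly from text, no audio is ever generated, which is what makes scaling to hundreds of billions of interleaved tokens tractable.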

Authors (7)
  1. Aohan Zeng (19 papers)
  2. Zhengxiao Du (22 papers)
  3. Mingdao Liu (5 papers)
  4. Lei Zhang (1689 papers)
  5. Shengmin Jiang (2 papers)
  6. Yuxiao Dong (119 papers)
  7. Jie Tang (302 papers)
Citations (3)
