
InSerter: Speech Instruction Following with Unsupervised Interleaved Pre-training (2503.02769v2)

Published 4 Mar 2025 in cs.SD, cs.CL, cs.HC, and eess.AS

Abstract: Recent advancements in speech LLMs (SpeechLLMs) have attracted considerable attention. Nonetheless, current methods exhibit suboptimal performance in adhering to speech instructions. Notably, the intelligence of models significantly diminishes when processing speech-form input as compared to direct text-form input. Prior work has attempted to mitigate this semantic inconsistency between speech and text representations through techniques such as representation and behavior alignment, which involve the meticulous design of data pairs during the post-training phase. In this paper, we introduce a simple and scalable training method called InSerter, which stands for Interleaved Speech-Text Representation Pre-training. InSerter is designed to pre-train on large-scale unsupervised speech-text sequences, where the speech is synthesized from randomly selected segments of an extensive text corpus using text-to-speech conversion. Consequently, the model acquires the ability to generate textual continuations corresponding to the provided speech segments, obviating the need for intensive data design endeavors. To systematically evaluate speech instruction-following capabilities, we introduce SpeechInstructBench, the first comprehensive benchmark specifically designed for speech-oriented instruction-following tasks. Our proposed InSerter achieves SOTA performance on SpeechInstructBench and demonstrates superior or competitive results across diverse speech processing tasks.
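To make the data-construction idea concrete, the sketch below illustrates how interleaved speech-text pre-training sequences could be assembled: text spans are randomly sampled from a corpus, some are converted to speech with an off-the-shelf TTS system, and the mixed sequence is used for ordinary next-token prediction so the model learns to continue in text after a speech segment. This is a minimal illustration, not the authors' pipeline; `synthesize_speech`, `SpeechSegment`, and the `speech_ratio` parameter are hypothetical names introduced here.

```python
# Minimal sketch (not the authors' code) of assembling interleaved
# speech-text pre-training data, assuming a hypothetical TTS function.
import random
from dataclasses import dataclass
from typing import List, Union


@dataclass
class SpeechSegment:
    audio: bytes       # synthesized waveform for the replaced text span
    source_text: str   # text the speech was generated from (kept for reference)


def synthesize_speech(text: str) -> bytes:
    """Hypothetical TTS call; substitute any off-the-shelf TTS system."""
    return text.encode("utf-8")  # placeholder stand-in for real audio


def build_interleaved_sequence(sentences: List[str],
                               speech_ratio: float = 0.3,
                               seed: int = 0) -> List[Union[str, SpeechSegment]]:
    """Randomly replace a fraction of text spans with synthesized speech.

    The resulting mixed sequence is trained with next-token prediction on the
    text portions, so the model learns to produce the textual continuation
    that follows each speech segment.
    """
    rng = random.Random(seed)
    sequence: List[Union[str, SpeechSegment]] = []
    for sent in sentences:
        if rng.random() < speech_ratio:
            sequence.append(SpeechSegment(synthesize_speech(sent), sent))
        else:
            sequence.append(sent)
    return sequence


if __name__ == "__main__":
    corpus_passage = [
        "Speech language models often lag on spoken instructions.",
        "Interleaving speech and text during pre-training narrows that gap.",
        "The model emits the textual continuation regardless of input modality.",
    ]
    for item in build_interleaved_sequence(corpus_passage, speech_ratio=0.5):
        if isinstance(item, SpeechSegment):
            print("SPEECH |", item.source_text)
        else:
            print("TEXT   |", item)
```

Because the speech is synthesized from existing text, no manually paired speech-instruction data is required; the interleaving step alone supplies the cross-modal supervision described in the abstract.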

Authors (9)
  1. Dingdong Wang (7 papers)
  2. Jin Xu (131 papers)
  3. Ruihang Chu (18 papers)
  4. Zhifang Guo (14 papers)
  5. Xiong Wang (52 papers)
  6. Jincenzi Wu (5 papers)
  7. Dongchao Yang (51 papers)
  8. Shengpeng Ji (26 papers)
  9. Junyang Lin (99 papers)