SyllableLM: Learning Coarse Semantic Units for Speech Language Models (2410.04029v1)

Published 5 Oct 2024 in cs.CL, cs.AI, and eess.AS

Abstract: Language models require tokenized inputs. However, tokenization strategies for continuous data like audio and vision are often based on simple heuristics such as fixed-size convolutions or discrete clustering, which do not necessarily align with the semantic structure of the data. For speech in particular, the high resolution of waveforms (16,000 samples/second or more) presents a significant challenge, as speech-based language models have had to use several times more tokens per word than text-based LLMs. In this work, we introduce a controllable self-supervised technique to merge speech representations into coarser syllable-like units while still preserving semantic information. We do this by 1) extracting noisy boundaries through analyzing correlations in pretrained encoder losses and 2) iteratively improving model representations with a novel distillation technique. Our method produces controllable-rate semantic units at as low as 5Hz and 60bps and achieves SotA in syllabic segmentation and clustering. Using these coarse tokens, we successfully train SyllableLM, a Speech Language Model (SpeechLM) that matches or outperforms current SotA SpeechLMs on a range of spoken language modeling tasks. SyllableLM also achieves significant improvements in efficiency with a 30x reduction in training compute and a 4x wall-clock inference speedup.
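The abstract's pipeline (detect syllable-like boundaries over pretrained encoder features, then merge frames into coarse units) can be illustrated with a toy sketch. This is not the paper's implementation: the mean-pooling, the nearest-neighbor quantization, and the 4096-entry codebook are assumptions made for illustration, and all function names are hypothetical. Only the 5 Hz unit rate and the 60 bps figure come from the abstract; they are mutually consistent if each unit carries 12 bits (5 × 12 = 60), e.g. a codebook of 2^12 = 4096 clusters.

```python
import numpy as np

# Illustrative sketch only (not the paper's actual method): pool frame-level
# encoder features into one vector per detected syllable-like segment, then
# map each pooled vector to a discrete cluster id from a codebook.

def pool_segments(frames: np.ndarray, boundaries: list[int]) -> np.ndarray:
    """Mean-pool frame features of shape (T, D) into one vector per segment.

    `boundaries` holds frame indices where new segments start (excluding 0 and T).
    """
    edges = [0] + list(boundaries) + [len(frames)]
    segments = [frames[start:end].mean(axis=0)
                for start, end in zip(edges[:-1], edges[1:]) if end > start]
    return np.stack(segments)

def quantize(units: np.ndarray, codebook: np.ndarray) -> np.ndarray:
    """Assign each pooled unit (N, D) to its nearest codebook entry (K, D)."""
    dists = ((units[:, None, :] - codebook[None, :, :]) ** 2).sum(axis=-1)
    return dists.argmin(axis=1)

# Bitrate arithmetic from the abstract's numbers: ~5 units/sec with an assumed
# 4096-entry codebook gives 5 * log2(4096) = 5 * 12 = 60 bits per second.
rate_hz, vocab_size = 5, 4096
print(rate_hz * int(np.log2(vocab_size)), "bps")  # -> 60 bps
```

The key point the sketch conveys is the rate reduction: pooling many ~50 Hz encoder frames into one unit per syllable is what drives the token count, and hence training and inference cost, down.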

Authors (3)
  1. Alan Baade (2 papers)
  2. Puyuan Peng (21 papers)
  3. David Harwath (55 papers)
Citations (1)
