
TokenSplit: Using Discrete Speech Representations for Direct, Refined, and Transcript-Conditioned Speech Separation and Recognition (2308.10415v1)

Published 21 Aug 2023 in cs.SD, cs.LG, and eess.AS

Abstract: We present TokenSplit, a speech separation model that acts on discrete token sequences. The model is trained on multiple tasks simultaneously: separate and transcribe each speech source, and generate speech from text. The model operates on transcripts and audio token sequences and achieves multiple tasks through masking of inputs. The model is a sequence-to-sequence encoder-decoder model that uses the Transformer architecture. We also present a "refinement" version of the model that predicts enhanced audio tokens from the audio tokens of speech separated by a conventional separation model. Using both objective metrics and subjective MUSHRA listening tests, we show that our model achieves excellent separation performance, both with and without transcript conditioning. We also measure the automatic speech recognition (ASR) performance and provide audio samples of speech synthesis to demonstrate the additional utility of our model.
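The abstract describes a single encoder-decoder model that switches between separation, transcription-conditioned separation, and text-to-speech by masking parts of its input. A minimal sketch of that idea is below; the fixed input layout, the `MASK` sentinel, and the task names are illustrative assumptions, not the paper's actual vocabulary or sequence format.

```python
# Hypothetical sketch of task selection via input masking, as described in
# the TokenSplit abstract. The layout [transcript tokens | mixture audio
# tokens] and the MASK sentinel are assumptions for illustration only.

MASK = -1  # placeholder ID for masked-out input positions (assumed)

def build_input(transcript_tokens, mixture_tokens, task):
    """Assemble one encoder input; the task decides which segment is masked.

    "separate":    audio only; transcript segment masked
                   (model must separate and transcribe from audio alone).
    "conditioned": both transcript and audio visible
                   (transcript-conditioned separation).
    "tts":         transcript only; audio segment masked
                   (generate speech tokens from text).
    """
    if task == "separate":
        return [MASK] * len(transcript_tokens) + list(mixture_tokens)
    if task == "conditioned":
        return list(transcript_tokens) + list(mixture_tokens)
    if task == "tts":
        return list(transcript_tokens) + [MASK] * len(mixture_tokens)
    raise ValueError(f"unknown task: {task!r}")
```

Because every task shares one input layout, the same Transformer weights serve all three objectives; only the masking pattern changes between training examples.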

Authors (7)
  1. Hakan Erdogan (32 papers)
  2. Scott Wisdom (33 papers)
  3. Xuankai Chang (61 papers)
  4. Zalán Borsos (18 papers)
  5. Marco Tagliasacchi (37 papers)
  6. Neil Zeghidour (39 papers)
  7. John R. Hershey (40 papers)
Citations (6)
