Token-Level Serialized Output Training for Joint Streaming ASR and ST Leveraging Textual Alignments (2307.03354v2)

Published 7 Jul 2023 in cs.CL, cs.SD, and eess.AS

Abstract: In real-world applications, users often require both translations and transcriptions of speech to enhance their comprehension, particularly in streaming scenarios where incremental generation is necessary. This paper introduces a streaming Transformer-Transducer that jointly generates automatic speech recognition (ASR) and speech translation (ST) outputs using a single decoder. To produce ASR and ST content effectively with minimal latency, we propose a joint token-level serialized output training method that interleaves source and target words by leveraging an off-the-shelf textual aligner. Experiments in monolingual (it-en) and multilingual ({de,es,it}-en) settings demonstrate that our approach achieves the best quality-latency balance. With an average ASR latency of 1s and ST latency of 1.3s, our model matches or improves output quality compared to separate ASR and ST models, yielding an average improvement of 1.1 WER and 0.4 BLEU in the multilingual case.
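To make the serialization idea concrete, below is a minimal Python sketch (not the authors' code) of token-level interleaving: given source (ASR) words, target (ST) words, and word-alignment pairs from an external aligner such as awesome-align, it emits each source word followed by the target words aligned to it. The `<asr>`/`<st>` tags, the per-source grouping rule, and the end-of-sequence flush for unaligned target words are illustrative assumptions, not necessarily the paper's exact scheme.

```python
# Illustrative sketch of token-level serialized output construction:
# interleave source (ASR) and target (ST) words using word alignments.

from typing import List, Tuple

def interleave_serialized_output(
    src_words: List[str],
    tgt_words: List[str],
    alignments: List[Tuple[int, int]],  # (src_idx, tgt_idx) pairs from an aligner
) -> List[str]:
    """After each source word, emit the not-yet-emitted target words
    aligned to it, in target order; flush unaligned target words last."""
    # Group target indices by the source word they align to.
    tgt_by_src = {i: [] for i in range(len(src_words))}
    for s, t in alignments:
        tgt_by_src.setdefault(s, []).append(t)

    serialized, emitted = [], set()
    for s, word in enumerate(src_words):
        serialized.append(f"<asr>{word}")
        for t in sorted(tgt_by_src.get(s, [])):
            if t not in emitted:
                serialized.append(f"<st>{tgt_words[t]}")
                emitted.add(t)
    # Target words the aligner left unaligned go at the end.
    for t, word in enumerate(tgt_words):
        if t not in emitted:
            serialized.append(f"<st>{word}")
    return serialized

# Example (it-en): "ciao mondo" -> "hello world", aligned monotonically.
print(interleave_serialized_output(
    ["ciao", "mondo"], ["hello", "world"], [(0, 0), (1, 1)]
))
# ['<asr>ciao', '<st>hello', '<asr>mondo', '<st>world']
```

A single decoder trained on such interleaved sequences can stream both outputs incrementally, since each target word appears shortly after the source word that licenses it rather than after the full utterance.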

Authors (6)
  1. Sara Papi (33 papers)
  2. Peidong Wang (33 papers)
  3. Junkun Chen (27 papers)
  4. Jian Xue (30 papers)
  5. Jinyu Li (164 papers)
  6. Yashesh Gaur (43 papers)
Citations (7)