Improving Streaming Automatic Speech Recognition With Non-Streaming Model Distillation On Unsupervised Data (2010.12096v2)

Published 22 Oct 2020 in cs.SD, cs.CL, and eess.AS

Abstract: Streaming end-to-end automatic speech recognition (ASR) models are widely used on smart speakers and on-device applications. Since these models are expected to transcribe speech with minimal latency, they are constrained to be causal with no future context, compared to their non-streaming counterparts. Consequently, streaming models usually perform worse than non-streaming models. We propose a novel and effective learning method by leveraging a non-streaming ASR model as a teacher to generate transcripts on an arbitrarily large data set, which is then used to distill knowledge into streaming ASR models. This way, we scale the training of streaming models to up to 3 million hours of YouTube audio. Experiments show that our approach can significantly reduce the word error rate (WER) of RNNT models not only on LibriSpeech but also on YouTube data in four languages. For example, in French, we are able to reduce the WER by 16.4% relative to a baseline streaming model by leveraging a non-streaming teacher model trained on the same amount of labeled data as the baseline.
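The recipe described in the abstract — a full-context teacher transcribes unlabeled audio, and a causal student is trained on those machine-generated transcripts — can be sketched as follows. This is a minimal illustration, not the paper's implementation: `NonStreamingTeacher`, `StreamingStudent`, greedy per-frame decoding, and the frame-level cross-entropy loss are all hypothetical stand-ins (the paper trains RNN-T models with a sequence-level transducer loss).

```python
# Hypothetical sketch of non-streaming-to-streaming distillation:
# a full-context teacher pseudo-labels unsupervised audio, and a
# causal student is trained on those pseudo-labels.
import torch
import torch.nn as nn

class NonStreamingTeacher(nn.Module):
    """Stand-in for a full-context ASR encoder (hypothetical)."""
    def __init__(self, feat_dim=80, vocab=128):
        super().__init__()
        # Bidirectional: the teacher may look at the entire utterance.
        self.enc = nn.LSTM(feat_dim, 256, bidirectional=True, batch_first=True)
        self.out = nn.Linear(512, vocab)

    def forward(self, feats):
        h, _ = self.enc(feats)
        return self.out(h)  # per-frame token logits

class StreamingStudent(nn.Module):
    """Stand-in for a causal (streaming) ASR encoder (hypothetical)."""
    def __init__(self, feat_dim=80, vocab=128):
        super().__init__()
        # Unidirectional: the student only sees past context.
        self.enc = nn.LSTM(feat_dim, 256, batch_first=True)
        self.out = nn.Linear(256, vocab)

    def forward(self, feats):
        h, _ = self.enc(feats)
        return self.out(h)

teacher, student = NonStreamingTeacher(), StreamingStudent()
teacher.eval()  # the teacher is frozen; only the student is trained
opt = torch.optim.Adam(student.parameters(), lr=1e-4)

def distill_step(unlabeled_feats):
    # 1. Teacher generates pseudo-labels on unsupervised audio
    #    (greedy per-frame decoding here; the paper decodes transcripts).
    with torch.no_grad():
        pseudo = teacher(unlabeled_feats).argmax(-1)
    # 2. Student is trained to reproduce the teacher's labels.
    logits = student(unlabeled_feats)
    loss = nn.functional.cross_entropy(
        logits.reshape(-1, logits.size(-1)), pseudo.reshape(-1))
    opt.zero_grad(); loss.backward(); opt.step()
    return loss.item()

batch = torch.randn(4, 200, 80)  # 4 utterances of dummy log-mel features
print(distill_step(batch))
```

The bidirectional versus unidirectional encoders capture the key asymmetry the paper exploits: the teacher is free of the causality constraint, so its transcripts are more accurate than anything the streaming student could produce on its own, and the student inherits some of that accuracy through training on arbitrarily large unlabeled corpora.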

Authors (10)
  1. Thibault Doutre (3 papers)
  2. Wei Han (202 papers)
  3. Min Ma (14 papers)
  4. Zhiyun Lu (19 papers)
  5. Chung-Cheng Chiu (48 papers)
  6. Ruoming Pang (59 papers)
  7. Arun Narayanan (34 papers)
  8. Ananya Misra (4 papers)
  9. Yu Zhang (1400 papers)
  10. Liangliang Cao (52 papers)
Citations (22)
