Recognizing long-form speech using streaming end-to-end models (1910.11455v1)

Published 24 Oct 2019 in eess.AS, cs.CL, and cs.SD

Abstract: All-neural end-to-end (E2E) automatic speech recognition (ASR) systems that use a single neural network to transduce audio to word sequences have been shown to achieve state-of-the-art results on several tasks. In this work, we examine the ability of E2E models to generalize to unseen domains, where we find that models trained on short utterances fail to generalize to long-form speech. We propose two complementary solutions to address this: training on diverse acoustic data, and LSTM state manipulation to simulate long-form audio when training using short utterances. On a synthesized long-form test set, adding data diversity improves word error rate (WER) by 90% relative, while simulating long-form training improves it by 67% relative, though the combination doesn't improve over data diversity alone. On a real long-form call-center test set, adding data diversity improves WER by 40% relative. Simulating long-form training on top of data diversity improves performance by an additional 27% relative.
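The abstract's second proposed solution, manipulating LSTM states to simulate long-form audio while training on short utterances, can be pictured as carrying the encoder's recurrent state across consecutive short segments instead of resetting it to zero at each utterance boundary. Below is a minimal sketch of that idea, assuming a PyTorch LSTM encoder; the encoder dimensions, the segment slicing, and the placeholder loss are illustrative assumptions, not the paper's exact recipe.

```python
import torch
import torch.nn as nn

# Hypothetical streaming encoder: a stack of unidirectional LSTMs standing in
# for the E2E encoder discussed in the paper (dimensions are assumptions).
encoder = nn.LSTM(input_size=80, hidden_size=256, num_layers=2, batch_first=True)

def train_step_with_state_carryover(segments, state=None):
    """Process consecutive short segments as if they were one long recording:
    each segment's LSTM state is initialized from the previous segment's final
    state rather than being reset, simulating long-form audio at training time."""
    losses = []
    for feats in segments:  # feats: (batch, frames, 80) log-mel features
        out, state = encoder(feats, state)
        # Detach so gradients do not flow across segment boundaries
        # (truncated backpropagation through time).
        state = tuple(s.detach() for s in state)
        losses.append(out.pow(2).mean())  # placeholder loss, for illustration only
    return torch.stack(losses).mean(), state

# Example: three 200-frame segments standing in for slices of a long recording.
segments = [torch.randn(4, 200, 80) for _ in range(3)]
loss, final_state = train_step_with_state_carryover(segments)
loss.backward()
```

The design point the sketch illustrates is that the model sees recurrent states whose statistics resemble those encountered during long-form decoding, even though every training example remains a short utterance.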

Authors (6)
  1. Arun Narayanan (34 papers)
  2. Rohit Prabhavalkar (59 papers)
  3. Chung-Cheng Chiu (48 papers)
  4. David Rybach (19 papers)
  5. Tara N. Sainath (79 papers)
  6. Trevor Strohman (38 papers)
Citations (125)