Improving Tail Performance of a Deliberation E2E ASR Model Using a Large Text Corpus (2008.10491v2)

Published 24 Aug 2020 in eess.AS and cs.LG

Abstract: End-to-end (E2E) automatic speech recognition (ASR) systems lack the distinct language model (LM) component that characterizes traditional speech systems. While this simplifies the model architecture, it complicates the task of incorporating text-only data into training, which is important to the recognition of tail words that do not occur often in audio-text pairs. While shallow fusion has been proposed as a method for incorporating a pre-trained LM into an E2E model at inference time, it has not yet been explored for very large text corpora, and it has been shown to be very sensitive to hyperparameter settings in the beam search. In this work, we apply shallow fusion to incorporate a very large text corpus into a state-of-the-art E2E ASR model. We explore the impact of model size and show that intelligent pruning of the training set can be more effective than increasing the parameter count. Additionally, we show that incorporating the LM in minimum word error rate (MWER) fine-tuning makes shallow fusion far less dependent on optimal hyperparameter settings, reducing the difficulty of that tuning problem.
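Shallow fusion combines the E2E model's score with an external LM's score during beam search via a log-linear interpolation, log P_E2E(y|x) + λ log P_LM(y). Below is a minimal sketch of that scoring rule applied as beam rescoring; the function names, weight value, and toy hypotheses are illustrative assumptions, not details from the paper.

```python
# Toy illustration of shallow fusion: interpolate the E2E model's
# log-probability with an external LM's log-probability. `lm_weight`
# (lambda) is the beam-search hyperparameter the abstract describes
# shallow fusion as being sensitive to.

def shallow_fusion_score(e2e_logprob: float,
                         lm_logprob: float,
                         lm_weight: float = 0.3) -> float:
    """Fused score: log P_E2E(y|x) + lambda * log P_LM(y)."""
    return e2e_logprob + lm_weight * lm_logprob


def rescore_beam(hypotheses, lm_weight=0.3):
    """Re-rank beam hypotheses by their fused scores.

    `hypotheses` is a list of (tokens, e2e_logprob, lm_logprob)
    tuples; both scores are assumed precomputed for this sketch.
    """
    return sorted(
        hypotheses,
        key=lambda h: shallow_fusion_score(h[1], h[2], lm_weight),
        reverse=True,
    )


if __name__ == "__main__":
    beam = [
        (["turn", "on", "the", "lites"], -4.0, -7.5),   # rare "tail" spelling
        (["turn", "on", "the", "lights"], -4.2, -3.1),  # favored by the LM
    ]
    best = rescore_beam(beam, lm_weight=0.3)[0]
    print(" ".join(best[0]))
```

With λ = 0, the ranking falls back to the E2E model alone; as λ grows, text-only LM evidence increasingly decides between acoustically similar hypotheses, which is how a large text corpus can help on tail words. The paper reports that folding the LM into MWER fine-tuning makes the result far less dependent on choosing λ precisely.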

Authors (6)
  1. Cal Peyser (14 papers)
  2. Sepand Mavandadi (5 papers)
  3. Tara N. Sainath (79 papers)
  4. James Apfel (2 papers)
  5. Ruoming Pang (59 papers)
  6. Shankar Kumar (34 papers)
Citations (46)