
Less Is More: Improved RNN-T Decoding Using Limited Label Context and Path Merging (2012.06749v1)

Published 12 Dec 2020 in cs.CL, cs.LG, cs.SD, and eess.AS

Abstract: End-to-end models that condition the output label sequence on all previously predicted labels have emerged as popular alternatives to conventional systems for automatic speech recognition (ASR). Since unique label histories correspond to distinct model states, such models are decoded using an approximate beam-search process which produces a tree of hypotheses. In this work, we study the influence of the amount of label context on the model's accuracy, and its impact on the efficiency of the decoding process. We find that we can limit the context of the recurrent neural network transducer (RNN-T) during training to just four previous word-piece labels, without degrading word error rate (WER) relative to the full-context baseline. Limiting context also provides opportunities to improve the efficiency of the beam-search process during decoding by removing redundant paths from the active beam, and instead retaining them in the final lattice. This path-merging scheme can also be applied when decoding the baseline full-context model through an approximation. Overall, we find that the proposed path-merging scheme is extremely effective, allowing us to improve oracle WERs by up to 36% over the baseline, while simultaneously reducing the number of model evaluations by up to 5.3% without any degradation in WER.
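To make the path-merging idea concrete, here is a minimal, hypothetical Python sketch (not the authors' implementation): with a limited context of four word-piece labels, two hypotheses whose last four labels agree occupy the same model state, so only the better-scoring one needs to stay on the active beam, while the weaker one is retained for the final lattice rather than re-expanded. The names `Hypothesis` and `merge_paths` are illustrative assumptions.

```python
from dataclasses import dataclass, field

N_CONTEXT = 4  # limited label context: four previous word-piece labels


@dataclass
class Hypothesis:
    labels: tuple        # word-piece label IDs predicted so far
    log_prob: float      # total log-probability of this path
    merged: list = field(default_factory=list)  # alternatives kept for the lattice


def merge_paths(beam):
    """Merge hypotheses whose last N_CONTEXT labels match.

    With a limited-context model, the truncated history fully determines
    the model state, so paths sharing the same last N_CONTEXT labels are
    redundant on the beam: keep the best-scoring one active and store the
    others in its lattice instead of evaluating the model on them again.
    """
    best_by_state = {}
    for hyp in beam:
        state = hyp.labels[-N_CONTEXT:]  # model state = truncated label history
        kept = best_by_state.get(state)
        if kept is None:
            best_by_state[state] = hyp
        elif hyp.log_prob > kept.log_prob:
            hyp.merged.append(kept)      # demote the previous best to the lattice
            best_by_state[state] = hyp
        else:
            kept.merged.append(hyp)      # retain the weaker path in the lattice
    return list(best_by_state.values())


# Example: two paths differing only outside the 4-label window collapse to one.
a = Hypothesis(labels=(7, 1, 2, 3, 4), log_prob=-1.2)
b = Hypothesis(labels=(9, 1, 2, 3, 4), log_prob=-2.5)
print(merge_paths([a, b]))  # one active hypothesis; the other sits in .merged
```

This sketch only illustrates the merging criterion; in an actual RNN-T decoder the merge would typically combine path scores and would run inside each step of the beam search, which is how the abstract's reduction in model evaluations arises.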

Authors (7)
  1. Rohit Prabhavalkar (59 papers)
  2. Yanzhang He (41 papers)
  3. David Rybach (19 papers)
  4. Sean Campbell (4 papers)
  5. Arun Narayanan (34 papers)
  6. Trevor Strohman (38 papers)
  7. Tara N. Sainath (79 papers)
Citations (35)
