
Improved Neural Language Model Fusion for Streaming Recurrent Neural Network Transducer (2010.13878v1)

Published 26 Oct 2020 in cs.CL

Abstract: Recurrent Neural Network Transducer (RNN-T), like most end-to-end speech recognition model architectures, has an implicit neural network language model (NNLM) and cannot easily leverage unpaired text data during training. Previous work has proposed various fusion methods to incorporate external NNLMs into end-to-end ASR to address this weakness. In this paper, we propose extensions to these techniques that allow RNN-T to exploit external NNLMs during both training and inference time, resulting in 13-18% relative Word Error Rate improvement on LibriSpeech compared to strong baselines. Furthermore, our methods do not incur extra algorithmic latency and allow for flexible plug-and-play of different NNLMs without re-training. We also share in-depth analysis to better understand the benefits of the different NNLM fusion methods. Our work provides a reliable technique for leveraging unpaired text data to significantly improve RNN-T while keeping the system streamable, flexible, and lightweight.
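
The abstract does not spell out the fusion techniques themselves; shallow fusion, the most common baseline in this line of work, simply interpolates the RNN-T output distribution with an external NNLM's scores during beam search. Below is a minimal sketch of that baseline, assuming PyTorch and per-step log-probability tensors; the function name, tensor shapes, and default `lm_weight` are illustrative, not taken from the paper.

```python
import torch

def shallow_fusion_score(rnnt_log_probs: torch.Tensor,
                         lm_log_probs: torch.Tensor,
                         blank_id: int,
                         lm_weight: float = 0.3) -> torch.Tensor:
    """Log-linearly interpolate RNN-T posteriors with an external NNLM.

    rnnt_log_probs: (vocab,) log-probs from the RNN-T joiner for one
                    decoding step of one beam hypothesis.
    lm_log_probs:   (vocab,) log-probs from the external NNLM conditioned
                    on the same label history as the hypothesis.
    blank_id:       index of the blank symbol; blank carries no LM
                    evidence, so it is left unweighted.
    lm_weight:      interpolation weight, tuned on a dev set.
    """
    fused = rnnt_log_probs + lm_weight * lm_log_probs
    # Do not apply the LM term to the blank emission.
    fused[blank_id] = rnnt_log_probs[blank_id]
    return fused
```

Per the abstract, the paper's contribution is extending fusion of this kind so the external NNLM helps during training as well as inference, without adding algorithmic latency and while allowing NNLMs to be swapped without re-training.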

Authors (7)
  1. Suyoun Kim (22 papers)
  2. Yuan Shangguan (25 papers)
  3. Jay Mahadeokar (36 papers)
  4. Antoine Bruguier (10 papers)
  5. Christian Fuegen (36 papers)
  6. Michael L. Seltzer (34 papers)
  7. Duc Le (46 papers)
Citations (27)
