
Improving Rare Word Recognition with LM-aware MWER Training (2204.07553v2)

Published 15 Apr 2022 in cs.CL, cs.SD, and eess.AS

Abstract: Language models (LMs) significantly improve the recognition accuracy of end-to-end (E2E) models on words rarely seen during training, when used in either the shallow fusion or the rescoring setup. In this work, we introduce LMs into the learning of hybrid autoregressive transducer (HAT) models in the discriminative training framework, to mitigate the training-versus-inference gap in the use of LMs. For the shallow fusion setup, we use LMs during both hypothesis generation and loss computation, and the LM-aware MWER-trained model achieves a 10% relative improvement over the model trained with standard MWER on voice search test sets containing rare words. For the rescoring setup, we learn a small neural module to generate per-token fusion weights in a data-dependent manner. This model achieves the same rescoring WER as the regular MWER-trained model, but without the need for sweeping fusion weights.
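As a rough sketch of the two setups the abstract describes, the snippet below approximates (i) an MWER loss computed over LM-fused N-best hypothesis scores, and (ii) a small module that predicts a data-dependent fusion weight per token for rescoring. All function names, tensor shapes, and the `lm_weight` default are illustrative assumptions, not the paper's actual implementation.

```python
import torch
import torch.nn.functional as F

def lm_aware_mwer_loss(asr_logp, lm_logp, word_errors, lm_weight=0.3):
    """MWER loss over an N-best list with shallow-fusion LM scores (sketch).

    asr_logp:    (B, N) total log-probability of each hypothesis under the HAT model
    lm_logp:     (B, N) total log-probability of each hypothesis under the external LM
    word_errors: (B, N) word-error count of each hypothesis against the reference
    """
    # Fuse ASR and LM scores the same way inference-time shallow fusion does,
    # so the training-time hypothesis distribution matches decoding.
    fused = asr_logp + lm_weight * lm_logp
    # Renormalize over the N-best list to get a posterior over hypotheses.
    post = torch.softmax(fused, dim=-1)
    # Subtracting the mean error over the list is a standard variance-reducing baseline.
    baseline = word_errors.mean(dim=-1, keepdim=True)
    return (post * (word_errors - baseline)).sum(dim=-1).mean()

class PerTokenFusionWeight(torch.nn.Module):
    """Hypothetical small module predicting a per-token fusion weight,
    standing in for the paper's learned rescoring weights."""

    def __init__(self, dim):
        super().__init__()
        self.net = torch.nn.Sequential(
            torch.nn.Linear(dim, dim), torch.nn.Tanh(), torch.nn.Linear(dim, 1))

    def forward(self, token_feats, asr_token_logp, lm_token_logp):
        # token_feats:  (B, T, dim) per-token features of a hypothesis
        # *_token_logp: (B, T) per-token log-probabilities
        w = F.softplus(self.net(token_feats)).squeeze(-1)  # (B, T), w >= 0
        # Rescoring score: ASR log-prob plus a learned, per-token-weighted LM bonus.
        return (asr_token_logp + w * lm_token_logp).sum(dim=-1)
```

In the shallow-fusion case the fused score drives both hypothesis ranking and the loss, which is the training/inference mismatch the paper targets; the per-token module replaces the single global fusion weight that would otherwise have to be swept.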

Authors (13)
  1. Weiran Wang (65 papers)
  2. Tongzhou Chen (7 papers)
  3. Tara N. Sainath (79 papers)
  4. Ehsan Variani (13 papers)
  5. Rohit Prabhavalkar (59 papers)
  6. Ronny Huang (5 papers)
  7. Bhuvana Ramabhadran (47 papers)
  8. Neeraj Gaur (7 papers)
  9. Sepand Mavandadi (5 papers)
  10. Cal Peyser (14 papers)
  11. Trevor Strohman (38 papers)
  12. Yanzhang He (41 papers)
  13. David Rybach (19 papers)
Citations (13)