Developing RNN-T Models Surpassing High-Performance Hybrid Models with Customization Capability (2007.15188v1)

Published 30 Jul 2020 in eess.AS, cs.CL, and cs.SD

Abstract: Because of its streaming nature, recurrent neural network transducer (RNN-T) is a very promising end-to-end (E2E) model that may replace the popular hybrid model for automatic speech recognition. In this paper, we describe our recent development of RNN-T models with reduced GPU memory consumption during training, better initialization strategy, and advanced encoder modeling with future lookahead. When trained with Microsoft's 65 thousand hours of anonymized training data, the developed RNN-T model surpasses a very well trained hybrid model with both better recognition accuracy and lower latency. We further study how to customize RNN-T models to a new domain, which is important for deploying E2E models to practical scenarios. By comparing several methods leveraging text-only data in the new domain, we found that updating RNN-T's prediction and joint networks using text-to-speech generated from domain-specific text is the most effective.
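The customization finding at the end of the abstract — freeze the acoustic encoder and update only the prediction and joint networks on TTS audio synthesized from domain-specific text — can be sketched structurally. The sketch below is illustrative, not the paper's code: the class and function names are hypothetical, and real RNN-T components (LSTM encoder, embedding-based predictor, joint feed-forward layer) are stubbed out as parameter containers.

```python
# Hypothetical sketch of the domain-adaptation recipe described in the
# abstract: keep the encoder frozen and fine-tune only the prediction
# and joint networks on TTS audio generated from domain-specific text.
# All names here are illustrative, not from the paper.

class Param:
    """Stand-in for a trainable tensor with a requires_grad flag."""
    def __init__(self, name):
        self.name = name
        self.requires_grad = True

class Module:
    """Stand-in for a network component holding a few parameters."""
    def __init__(self, name, n_params):
        self.params = [Param(f"{name}.p{i}") for i in range(n_params)]

class RNNTransducer:
    """RNN-T = acoustic encoder + prediction network + joint network."""
    def __init__(self):
        self.encoder = Module("encoder", 4)        # acoustic encoder
        self.prediction = Module("prediction", 2)  # label predictor (LM-like)
        self.joint = Module("joint", 1)            # fuses encoder/prediction

    def trainable_params(self):
        return [p
                for m in (self.encoder, self.prediction, self.joint)
                for p in m.params if p.requires_grad]

def prepare_for_domain_adaptation(model):
    # Freeze the encoder: TTS audio can carry synthetic-speech artifacts,
    # so only the text-sensitive components (prediction + joint) are
    # updated on the synthesized in-domain data.
    for p in model.encoder.params:
        p.requires_grad = False
    return model

model = prepare_for_domain_adaptation(RNNTransducer())
print(sorted(p.name for p in model.trainable_params()))
```

After `prepare_for_domain_adaptation`, an optimizer built from `trainable_params()` would touch only the prediction and joint networks, mirroring the text-only adaptation method the abstract reports as most effective.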

Authors (11)
  1. Jinyu Li (164 papers)
  2. Rui Zhao (241 papers)
  3. Zhong Meng (53 papers)
  4. Yanqing Liu (48 papers)
  5. Wenning Wei (10 papers)
  6. Sarangarajan Parthasarathy (9 papers)
  7. Vadim Mazalov (5 papers)
  8. Zhenghao Wang (5 papers)
  9. Lei He (121 papers)
  10. Sheng Zhao (75 papers)
  11. Yifan Gong (82 papers)
Citations (106)