Accelerating RNN-T Training and Inference Using CTC guidance (2210.16481v1)

Published 29 Oct 2022 in eess.AS, cs.CL, and cs.SD

Abstract: We propose a novel method to accelerate the training and inference of the recurrent neural network transducer (RNN-T) using guidance from a co-trained connectionist temporal classification (CTC) model. We make a key assumption: if an encoder embedding frame is classified as blank by the CTC model, it is likely that this frame will be aligned to blank for all partial alignments or hypotheses in the RNN-T, and it can therefore be discarded from the decoder input. We also show that this frame-reduction operation can be applied in the middle of the encoder, which results in a significant speed-up for RNN-T training and inference. We further show that the CTC alignment, a by-product of the CTC decoder, can be used to perform lattice reduction for RNN-T during training. Our method is evaluated on the Librispeech and SpeechStew tasks. We demonstrate that the proposed method accelerates RNN-T inference by 2.2 times with similar or slightly better word error rates (WER).
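The central frame-reduction idea from the abstract can be sketched in a few lines: frames that the co-trained CTC head confidently labels as blank are dropped before reaching the RNN-T decoder. The sketch below is a minimal illustration under stated assumptions, not the authors' implementation; the function name, the `blank_id` convention, and the confidence threshold are all hypothetical choices.

```python
import torch

def ctc_blank_frame_reduction(encoder_out, ctc_logits, blank_id=0, threshold=0.95):
    """Drop encoder frames that a co-trained CTC head labels as blank.

    encoder_out: (T, D) encoder embeddings for one utterance
    ctc_logits:  (T, V) logits from the CTC classifier over the same frames
    Returns the reduced (T', D) frame sequence passed on to the RNN-T decoder.
    """
    probs = torch.softmax(ctc_logits, dim=-1)
    # Keep a frame unless the CTC model is confident it is blank.
    keep = probs[:, blank_id] < threshold
    return encoder_out[keep]

# Usage with hypothetical shapes:
enc = torch.randn(120, 256)            # 120 frames of 256-dim embeddings
logits = torch.randn(120, 32)          # CTC logits over a 32-symbol vocabulary
reduced = ctc_blank_frame_reduction(enc, logits)
print(enc.shape, "->", reduced.shape)  # blank-dominated frames removed
```

Because the RNN-T decoding cost grows with the number of encoder frames, removing blank frames shortens the lattice the decoder must traverse, which is where the reported 2.2x inference speed-up comes from.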

Authors (6)
  1. Yongqiang Wang
  2. Zhehuai Chen
  3. Chengjian Zheng
  4. Yu Zhang
  5. Wei Han
  6. Parisa Haghani
Citations (22)
