
FSR: Accelerating the Inference Process of Transducer-Based Models by Applying Fast-Skip Regularization (2104.02882v1)

Published 7 Apr 2021 in eess.AS, cs.CL, and cs.SD

Abstract: Transducer-based models, such as RNN-Transducer and transformer-transducer, have achieved great success in speech recognition. A typical transducer model decodes the output sequence step by step, conditioned on the current acoustic state and previously predicted tokens. Statistically, blank tokens account for nearly 90% of all tokens in the prediction results. Predicting these blank tokens takes considerable computation and time, yet only the non-blank tokens appear in the final output sequence. Therefore, we propose a method named fast-skip regularization, which tries to align the blank positions predicted by a transducer with those predicted by a CTC model. During inference, the transducer model can predict the blank tokens in advance with a simple CTC projection layer, without the many complicated forward calculations of the transducer decoder, and then skip them, which greatly reduces computation and improves inference speed. All experiments are conducted on a public Mandarin Chinese dataset, AISHELL-1. The results show that fast-skip regularization can indeed help the transducer model learn the blank position alignments. Moreover, inference with fast-skip can be sped up nearly 4 times with only a little performance degradation.
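The skipping mechanism described in the abstract can be illustrated with a short sketch. The function below is a hypothetical greedy transducer decoding loop in which a CTC projection layer's blank probability gates each frame: frames the CTC layer confidently marks as blank are skipped without running the expensive decoder and joint network. The names (`fast_skip_greedy_decode`, `decoder_step`, `joint`, the `threshold` value) are illustrative assumptions, not the paper's actual implementation, and real greedy transducer decoding may emit several tokens per frame.

```python
import numpy as np

BLANK = 0  # index of the blank token


def fast_skip_greedy_decode(enc_states, ctc_blank_probs, decoder_step, joint,
                            threshold=0.95):
    """Hypothetical greedy transducer decoding with fast-skip.

    For each encoder frame, if the CTC projection layer assigns the blank
    token a probability above `threshold`, the frame is skipped without
    invoking the transducer decoder or joint network.
    """
    tokens = []        # decoded non-blank tokens
    dec_state = None   # decoder hidden state (opaque here)
    skipped = 0        # frames skipped via the CTC blank prediction
    for t, h in enumerate(enc_states):
        if ctc_blank_probs[t] > threshold:
            skipped += 1
            continue  # blank predicted cheaply by the CTC layer: skip
        # Otherwise run the (expensive) decoder and joint network.
        dec_out, dec_state = decoder_step(tokens, dec_state)
        logits = joint(h, dec_out)
        k = int(np.argmax(logits))
        if k != BLANK:
            tokens.append(k)
    return tokens, skipped


if __name__ == "__main__":
    # Toy stand-ins: the joint network just returns the encoder logits,
    # and the decoder step is a no-op (both purely for illustration).
    enc = [np.array([0.1, 2.0, 0.0]),
           np.array([5.0, 0.0, 0.0]),
           np.array([0.0, 0.0, 3.0])]
    ctc_blank = [0.10, 0.99, 0.20]  # frame 1 is confidently blank
    dec_step = lambda toks, st: (None, st)
    joint_fn = lambda h, d: h
    out, n_skipped = fast_skip_greedy_decode(enc, ctc_blank, dec_step, joint_fn)
    print(out, n_skipped)
```

With the toy inputs above, the middle frame is skipped entirely, so the decoder and joint network run on only two of the three frames; the reported ~4x speedup follows from blanks dominating real utterances far more heavily.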

Authors (6)
  1. Zhengkun Tian (24 papers)
  2. Jiangyan Yi (77 papers)
  3. Ye Bai (28 papers)
  4. Jianhua Tao (139 papers)
  5. Shuai Zhang (319 papers)
  6. Zhengqi Wen (69 papers)
Citations (15)
