TSNAT: Two-Step Non-Autoregressive Transformer Models for Speech Recognition (2104.01522v1)

Published 4 Apr 2021 in eess.AS and cs.CL

Abstract: Autoregressive (AR) models, such as attention-based encoder-decoder models and the RNN-Transducer, have achieved great success in speech recognition. They predict each output token conditioned on the previous tokens and the acoustic encoded states, which is inefficient on GPUs. Non-autoregressive (NAR) models can get rid of the temporal dependency between the output tokens and predict the entire output sequence in as few as one step. However, NAR models still face two major problems. On the one hand, there is still a large performance gap between NAR models and advanced AR models. On the other hand, most NAR models are difficult to train and slow to converge. To address these two problems, we propose a new model named the two-step non-autoregressive transformer (TSNAT), which improves the performance and accelerates the convergence of the NAR model by learning prior knowledge from a parameter-sharing AR model. Furthermore, we introduce a two-stage method into the inference process, which greatly improves the model performance. All experiments are conducted on a public Chinese Mandarin dataset, AISHELL-1. The results show that TSNAT achieves performance competitive with the AR model and outperforms many more complicated NAR models.
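To make the AR/NAR efficiency contrast concrete, here is a minimal PyTorch sketch (not the paper's architecture; all module names, shapes, and heads are hypothetical). It shows why AR decoding needs one sequential forward step per output token, while NAR decoding predicts every output position in a single parallel pass over the encoded acoustic states.

```python
# Illustrative sketch only: contrasts autoregressive (AR) decoding,
# which emits one token per sequential step, with non-autoregressive
# (NAR) decoding, which fills all output positions in one parallel pass.
# TinyEncoder, ar_head, nar_head, and all sizes are hypothetical.
import torch
import torch.nn as nn

VOCAB, HIDDEN, MAX_LEN = 100, 64, 8

class TinyEncoder(nn.Module):
    """Stand-in for an acoustic encoder producing encoded states."""
    def __init__(self):
        super().__init__()
        self.proj = nn.Linear(HIDDEN, HIDDEN)

    def forward(self, feats):                # feats: (B, T, HIDDEN)
        return torch.relu(self.proj(feats))  # encoded acoustic states

def ar_decode(encoded, step_fn):
    """AR: each token conditions on previously emitted tokens,
    so decoding is a sequential loop of MAX_LEN forward steps."""
    B = encoded.size(0)
    tokens = torch.zeros(B, 1, dtype=torch.long)  # <sos> = 0
    for _ in range(MAX_LEN):
        logits = step_fn(encoded, tokens)         # (B, VOCAB) for next position
        next_tok = logits.argmax(-1, keepdim=True)
        tokens = torch.cat([tokens, next_tok], dim=1)
    return tokens[:, 1:]                          # drop <sos>

def nar_decode(encoded, head):
    """NAR: no dependency between output tokens, so all MAX_LEN
    positions are predicted in a single parallel forward pass."""
    ctx = encoded.mean(1, keepdim=True).expand(-1, MAX_LEN, -1)
    return head(ctx).argmax(-1)                   # (B, MAX_LEN)

if __name__ == "__main__":
    enc = TinyEncoder()
    encoded = enc(torch.randn(2, 20, HIDDEN))     # batch of 2 utterances

    # Hypothetical next-token predictor for the AR loop.
    ar_head = nn.Linear(HIDDEN, VOCAB)
    step = lambda e, toks: ar_head(e.mean(1))

    nar_head = nn.Linear(HIDDEN, VOCAB)
    print("AR :", ar_decode(encoded, step).shape)      # MAX_LEN sequential steps
    print("NAR:", nar_decode(encoded, nar_head).shape) # one parallel step
```

The sequential loop in `ar_decode` is the GPU inefficiency the abstract refers to; `nar_decode` removes it, at the cost of the performance and convergence gaps that TSNAT's parameter-sharing AR teacher and two-stage inference are designed to close.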

Authors (7)
  1. Zhengkun Tian (24 papers)
  2. Jiangyan Yi (77 papers)
  3. Jianhua Tao (139 papers)
  4. Ye Bai (28 papers)
  5. Shuai Zhang (319 papers)
  6. Zhengqi Wen (69 papers)
  7. Xuefei Liu (24 papers)
Citations (17)