Synchronous Transformers for End-to-End Speech Recognition (1912.02958v2)

Published 6 Dec 2019 in eess.AS, cs.CL, and cs.LG

Abstract: In most attention-based sequence-to-sequence models, the decoder predicts the output sequence conditioned on the entire input sequence processed by the encoder. This asynchrony between encoding and decoding makes such models difficult to apply to online speech recognition. In this paper, we propose the synchronous transformer, a model that addresses this problem by predicting the output sequence chunk by chunk. As soon as a fixed-length chunk of the input sequence has been processed by the encoder, the decoder begins predicting symbols. During training, a forward-backward algorithm is introduced to optimize over all possible alignment paths. Our model is evaluated on the Mandarin dataset AISHELL-1. The experiments show that the synchronous transformer is able to perform encoding and decoding synchronously, achieving a character error rate of 8.91% on the test set.
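
To make the chunk-synchronous idea concrete, below is a minimal, hypothetical sketch of greedy chunk-by-chunk decoding in PyTorch: the encoder consumes fixed-length chunks of the acoustic features, and after each new chunk the decoder emits symbols until it predicts a special end-of-chunk token. The module sizes, chunk length, and the `EOC_ID` / `SOS_ID` symbols are illustrative assumptions, not the authors' exact architecture, and the training-time forward-backward alignment procedure described in the abstract is not shown here.

```python
# Hypothetical sketch of chunk-synchronous (greedy) decoding.
# Assumptions: each chunk is encoded independently and appended to a running
# memory; the decoder stops on an assumed end-of-chunk symbol per chunk.
import torch
import torch.nn as nn

CHUNK_LEN = 16   # frames per encoder chunk (assumed)
SOS_ID = 0       # hypothetical start-of-sequence symbol id
EOC_ID = 1       # hypothetical end-of-chunk symbol id
VOCAB = 4000
D_MODEL = 256

encoder = nn.TransformerEncoder(
    nn.TransformerEncoderLayer(d_model=D_MODEL, nhead=4, batch_first=True),
    num_layers=2,
)
decoder = nn.TransformerDecoder(
    nn.TransformerDecoderLayer(d_model=D_MODEL, nhead=4, batch_first=True),
    num_layers=2,
)
embed = nn.Embedding(VOCAB, D_MODEL)
out_proj = nn.Linear(D_MODEL, VOCAB)

def chunk_synchronous_decode(features, max_symbols_per_chunk=5):
    """Greedy chunk-by-chunk decoding; `features` has shape (1, T, D_MODEL)."""
    hyp = [SOS_ID]
    memory = torch.empty(1, 0, D_MODEL)  # encoder states seen so far
    for start in range(0, features.size(1), CHUNK_LEN):
        chunk = features[:, start:start + CHUNK_LEN]
        # Encode only the new chunk and append it to the running memory.
        memory = torch.cat([memory, encoder(chunk)], dim=1)
        # Emit symbols for this chunk until <eoc> (or a safety cap) is reached.
        for _ in range(max_symbols_per_chunk):
            tgt = embed(torch.tensor([hyp]))
            logits = out_proj(decoder(tgt, memory))[:, -1]
            next_id = int(logits.argmax(dim=-1))
            if next_id == EOC_ID:
                break
            hyp.append(next_id)
    return hyp[1:]

# Example with random features standing in for acoustic frames.
tokens = chunk_synchronous_decode(torch.randn(1, 48, D_MODEL))
```

In this sketch each chunk is encoded without cross-chunk context for simplicity; the paper's encoder may carry context across chunk boundaries.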

Authors (6)
  1. Zhengkun Tian (24 papers)
  2. Jiangyan Yi (77 papers)
  3. Ye Bai (28 papers)
  4. Jianhua Tao (139 papers)
  5. Shuai Zhang (319 papers)
  6. Zhengqi Wen (69 papers)
Citations (71)
