
Transformer-Transducer: End-to-End Speech Recognition with Self-Attention (1910.12977v1)

Published 28 Oct 2019 in eess.AS, cs.CL, and cs.SD

Abstract: We explore options to use Transformer networks in neural transducer for end-to-end speech recognition. Transformer networks use self-attention for sequence modeling and come with advantages in parallel computation and capturing context. We propose 1) using VGGNet with causal convolution to incorporate positional information and reduce frame rate for efficient inference, and 2) using truncated self-attention to enable streaming for Transformer and reduce computational complexity. All experiments are conducted on the public LibriSpeech corpus. The proposed Transformer-Transducer outperforms neural transducer with LSTM/BLSTM networks and achieves word error rates of 6.37% on the test-clean set and 15.30% on the test-other set, while remaining streamable, compact with 45.7M parameters for the entire system, and computationally efficient with complexity of O(T), where T is the input sequence length.
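
The truncated self-attention described in the abstract restricts each frame to a bounded left and right context, which is what makes the model streamable and keeps complexity linear in T. The sketch below is a minimal, illustrative single-head version of this idea in PyTorch; the function names and the window sizes are assumptions for demonstration, not the paper's actual configuration or hyperparameters.

```python
import torch

def truncated_attention_mask(seq_len, left_context, right_context):
    # Boolean mask: position t may attend to frames in
    # [t - left_context, t + right_context]; True = allowed.
    idx = torch.arange(seq_len)
    rel = idx.unsqueeze(0) - idx.unsqueeze(1)  # rel[t, s] = s - t
    return (rel >= -right_context) & (rel <= left_context)

def truncated_self_attention(x, left_context=10, right_context=2):
    # x: (seq_len, d_model). Single-head scaled dot-product attention
    # limited to the truncated context window (illustrative values).
    seq_len, d_model = x.shape
    scores = (x @ x.transpose(0, 1)) / d_model ** 0.5
    mask = truncated_attention_mask(seq_len, left_context, right_context)
    scores = scores.masked_fill(~mask, float("-inf"))
    weights = torch.softmax(scores, dim=-1)
    return weights @ x

# Example: 100 acoustic frames with a 64-dim representation.
frames = torch.randn(100, 64)
out = truncated_self_attention(frames)
```

Because each row of the mask has at most left_context + right_context + 1 nonzero entries, the attention cost per frame is constant rather than growing with the full sequence, giving the O(T) behavior the abstract claims.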

Authors (9)
  1. Ching-Feng Yeh (22 papers)
  2. Jay Mahadeokar (36 papers)
  3. Kaustubh Kalgaonkar (6 papers)
  4. Yongqiang Wang (92 papers)
  5. Duc Le (46 papers)
  6. Mahaveer Jain (6 papers)
  7. Kjell Schubert (5 papers)
  8. Christian Fuegen (36 papers)
  9. Michael L. Seltzer (34 papers)
Citations (145)