Pushing the Limits of Non-Autoregressive Speech Recognition (2104.03416v4)
Published 7 Apr 2021 in eess.AS, cs.CL, cs.LG, and cs.SD
Abstract: We apply recent advancements in end-to-end speech recognition to non-autoregressive automatic speech recognition. We push the limits of non-autoregressive state-of-the-art results on multiple datasets: LibriSpeech, Fisher+Switchboard, and Wall Street Journal. Key to our recipe, we leverage CTC on giant Conformer neural network architectures with SpecAugment and wav2vec2 pre-training. We achieve 1.8%/3.6% WER on the LibriSpeech test/test-other sets, 5.1%/9.8% WER on Switchboard, and 3.4% WER on Wall Street Journal, all without a language model.
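The non-autoregressive property of this recipe comes from CTC: the acoustic model emits per-frame token distributions that are decoded in a single parallel pass, with no output token conditioned on a previous one. Below is a minimal sketch of CTC greedy decoding (merge repeated labels, then drop blanks); the blank index, vocabulary size, and NumPy-based decoding are illustrative assumptions, not the authors' implementation.

```python
# Minimal sketch of non-autoregressive CTC greedy decoding.
# Illustrative only: the blank index and logits shape are assumptions,
# not details from the paper.

import numpy as np

BLANK = 0  # assumed index of the CTC blank token


def ctc_greedy_decode(logits: np.ndarray) -> list[int]:
    """Collapse a frame-level logit matrix of shape (T, V) into a label sequence.

    All frames are decoded with one argmax, which is what makes CTC inference
    non-autoregressive: no decoding step depends on a previously emitted token.
    """
    best = logits.argmax(axis=-1)  # one argmax per frame, no recurrence
    # CTC rule: first merge consecutive repeats, then remove blanks.
    collapsed = [t for i, t in enumerate(best) if i == 0 or t != best[i - 1]]
    return [int(t) for t in collapsed if t != BLANK]


if __name__ == "__main__":
    rng = np.random.default_rng(0)
    fake_logits = rng.normal(size=(50, 32))  # 50 frames, 32-token vocabulary
    print(ctc_greedy_decode(fake_logits))
```

Because every frame is decoded independently, inference cost does not grow with the length of the output transcript, in contrast to autoregressive attention-based decoders.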
- Edwin G. Ng
- Chung-Cheng Chiu
- Yu Zhang
- William Chan