
Semantic Mask for Transformer based End-to-End Speech Recognition (1912.03010v2)

Published 6 Dec 2019 in cs.CL, cs.SD, and eess.AS

Abstract: Attention-based encoder-decoder models have achieved impressive results on both automatic speech recognition (ASR) and text-to-speech (TTS) tasks. This approach takes advantage of the memorization capacity of neural networks to learn the mapping from the input sequence to the output sequence from scratch, without assuming prior knowledge such as alignments. However, such models are prone to overfitting, especially when the amount of training data is limited. Inspired by SpecAugment and BERT, in this paper we propose a semantic-mask-based regularization for training this kind of end-to-end (E2E) model. The idea is to mask the input features corresponding to a particular output token, e.g., a word or a word-piece, in order to encourage the model to fill in the token based on contextual information. While this approach is applicable to the encoder-decoder framework with any type of neural network architecture, we study the transformer-based model for ASR in this work. We perform experiments on the LibriSpeech 960h and TedLium2 datasets, and achieve state-of-the-art performance on the test set within the scope of E2E models.
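
To make the masking idea concrete, here is a minimal sketch (not the authors' implementation) of masking the acoustic-feature frames aligned to randomly selected output tokens, in the spirit of SpecAugment but at the token level. It assumes a forced alignment provides a frame span per word-piece; the function name, the `token_spans` input, the 15% masking probability, and the mean-fill default are illustrative assumptions.

```python
import numpy as np

def semantic_mask(features, token_spans, mask_prob=0.15, fill="mean"):
    """Mask feature frames aligned to randomly chosen output tokens.

    features:    (T, D) array of acoustic features (e.g., filterbanks).
    token_spans: list of (start_frame, end_frame) pairs, one per output
                 token (word or word-piece), from a forced alignment.
    mask_prob:   probability of masking each token's span (assumed value).
    fill:        value used for masked frames ("mean" of the utterance or zero).
    """
    masked = features.copy()
    fill_value = features.mean(axis=0) if fill == "mean" else 0.0
    for start, end in token_spans:
        if np.random.rand() < mask_prob:
            masked[start:end] = fill_value
    return masked

# Example usage: a 300-frame utterance with three aligned word-pieces.
feats = np.random.randn(300, 80)
spans = [(0, 90), (90, 210), (210, 300)]
augmented = semantic_mask(feats, spans)
```

Because whole token spans are hidden rather than random time bands, the decoder must predict the corresponding token from surrounding context, which is the regularization effect the paper targets.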

Authors (10)
  1. Chengyi Wang (32 papers)
  2. Yu Wu (196 papers)
  3. Yujiao Du (2 papers)
  4. Jinyu Li (164 papers)
  5. Shujie Liu (101 papers)
  6. Liang Lu (42 papers)
  7. Shuo Ren (22 papers)
  8. Guoli Ye (15 papers)
  9. Sheng Zhao (75 papers)
  10. Ming Zhou (182 papers)
Citations (51)
