A Further Study of Unsupervised Pre-training for Transformer Based Speech Recognition (2005.09862v2)

Published 20 May 2020 in eess.AS, cs.CL, and cs.SD

Abstract: Building a good speech recognition system usually requires large amounts of transcribed data, which is expensive to collect. To tackle this problem, many unsupervised pre-training methods have been proposed. Among these methods, Masked Predictive Coding (MPC) achieved significant improvements on various speech recognition datasets with a BERT-like Masked Reconstruction loss and a Transformer backbone. However, many aspects of MPC have not been fully investigated. In this paper, we conduct a further study on MPC and focus on three important aspects: the effect of pre-training data speaking style, its extension to streaming models, and how to better transfer learned knowledge from the pre-training stage to downstream tasks. Experiments revealed that pre-training data with a matching speaking style is more useful on downstream recognition tasks. A unified training objective with APC and MPC provided 8.46% relative error reduction on a streaming model trained on HKUST. Also, the combination of target data adaptation and layer-wise discriminative training helped the knowledge transfer of MPC, which achieved 3.99% relative error reduction on AISHELL over a strong baseline.
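As context for the abstract, the sketch below illustrates the general shape of an MPC-style pre-training step: random frames of the input features are masked, a Transformer encoder reconstructs them, and the loss is computed only on the masked positions. This is a minimal illustration assuming PyTorch; the model sizes, the 15% masking rate, the zero-masking strategy, and the `MPCModel`/`mpc_loss` names are assumptions made for illustration, not the paper's exact configuration.

```python
# Minimal sketch of a Masked Predictive Coding (MPC)-style pre-training step.
# Assumptions (not from the paper): 80-dim FBANK features, a plain
# nn.TransformerEncoder backbone, 15% frame masking by zeroing, and an L1
# reconstruction loss restricted to masked frames.

import torch
import torch.nn as nn


class MPCModel(nn.Module):
    def __init__(self, feat_dim: int = 80, d_model: int = 256,
                 nhead: int = 4, num_layers: int = 6):
        super().__init__()
        self.input_proj = nn.Linear(feat_dim, d_model)
        encoder_layer = nn.TransformerEncoderLayer(
            d_model=d_model, nhead=nhead, batch_first=True)
        self.encoder = nn.TransformerEncoder(encoder_layer, num_layers=num_layers)
        # Project back to the feature dimension to reconstruct masked frames.
        self.output_proj = nn.Linear(d_model, feat_dim)

    def forward(self, feats: torch.Tensor) -> torch.Tensor:
        # feats: (batch, time, feat_dim)
        return self.output_proj(self.encoder(self.input_proj(feats)))


def mpc_loss(model: MPCModel, feats: torch.Tensor, mask_prob: float = 0.15):
    """Mask random frames, reconstruct them, and score only masked positions."""
    mask = torch.rand(feats.shape[:2], device=feats.device) < mask_prob  # (B, T)
    masked_feats = feats.masked_fill(mask.unsqueeze(-1), 0.0)  # zero masked frames
    pred = model(masked_feats)
    # L1 reconstruction loss computed only on the masked frames.
    return (pred - feats).abs()[mask].mean()


if __name__ == "__main__":
    model = MPCModel()
    dummy = torch.randn(2, 100, 80)  # (batch, frames, FBANK dims)
    loss = mpc_loss(model, dummy)
    loss.backward()
    print(f"MPC loss: {loss.item():.4f}")
```

In the paper's streaming extension, this masked-reconstruction objective is combined with an APC-style autoregressive objective into a unified training target; the sketch above only covers the non-streaming MPC side.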

Authors (8)
  1. Dongwei Jiang (16 papers)
  2. Wubo Li (8 papers)
  3. Ruixiong Zhang (10 papers)
  4. Miao Cao (13 papers)
  5. Ne Luo (5 papers)
  6. Yang Han (62 papers)
  7. Wei Zou (62 papers)
  8. Xiangang Li (47 papers)
Citations (28)
