A Further Study of Unsupervised Pre-training for Transformer Based Speech Recognition (2005.09862v2)
Abstract: Building a good speech recognition system usually requires large amounts of transcribed data, which is expensive to collect. To tackle this problem, many unsupervised pre-training methods have been proposed. Among these methods, Masked Predictive Coding (MPC) achieved significant improvements on various speech recognition datasets with a BERT-like masked reconstruction loss and a Transformer backbone. However, many aspects of MPC have not been fully investigated. In this paper, we conduct a further study of MPC and focus on three important aspects: the effect of the speaking style of pre-training data, its extension to streaming models, and how to better transfer the learned knowledge from the pre-training stage to downstream tasks. Experiments revealed that pre-training data with a matching speaking style is more useful for downstream recognition tasks. A unified training objective combining APC and MPC provided an 8.46% relative error reduction for a streaming model trained on HKUST. Also, the combination of target data adaptation and layer-wise discriminative training helped the knowledge transfer of MPC, achieving a 3.99% relative error reduction on AISHELL over a strong baseline.
- Dongwei Jiang
- Wubo Li
- Ruixiong Zhang
- Miao Cao
- Ne Luo
- Yang Han
- Wei Zou
- Xiangang Li
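
Below is a minimal sketch of the MPC-style masked reconstruction objective described in the abstract: a fraction of input frames is masked, a Transformer encoder processes the corrupted sequence, and the model is trained to reconstruct the original features at the masked positions. It assumes PyTorch; the module name, hyperparameters (`feat_dim`, `d_model`, `mask_prob`, etc.), and the use of an L1 loss are illustrative assumptions, not the authors' exact implementation.

```python
import torch
import torch.nn as nn

class MaskedPredictiveCoding(nn.Module):
    """Transformer encoder trained to reconstruct masked input frames (sketch)."""

    def __init__(self, feat_dim=80, d_model=512, nhead=8, num_layers=6, mask_prob=0.15):
        super().__init__()
        self.mask_prob = mask_prob  # fraction of frames to mask (assumed value)
        self.input_proj = nn.Linear(feat_dim, d_model)
        layer = nn.TransformerEncoderLayer(d_model, nhead, batch_first=True)
        self.encoder = nn.TransformerEncoder(layer, num_layers)
        self.output_proj = nn.Linear(d_model, feat_dim)

    def forward(self, feats):
        # feats: (batch, time, feat_dim) acoustic features, e.g. log-mel filterbanks
        mask = torch.rand(feats.shape[:2], device=feats.device) < self.mask_prob
        masked = feats.masked_fill(mask.unsqueeze(-1), 0.0)  # zero out masked frames
        recon = self.output_proj(self.encoder(self.input_proj(masked)))
        # L1 reconstruction loss, computed only on the masked positions
        return (recon - feats).abs()[mask].mean()


# Usage: pre-train on unlabeled audio features, then fine-tune the encoder for ASR.
model = MaskedPredictiveCoding()
feats = torch.randn(4, 200, 80)  # dummy batch: 4 utterances, 200 frames each
loss = model(feats)
loss.backward()
```

The same encoder can be reused after pre-training; the paper's streaming variant and the unified APC+MPC objective would change the attention masking and add an autoregressive prediction term, which this sketch does not cover.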