An Exploration of Self-Supervised Pretrained Representations for End-to-End Speech Recognition (2110.04590v1)
Abstract: Self-supervised pretraining on speech data has made substantial progress. High-fidelity representations of the speech signal are learned from large amounts of untranscribed data and show promising performance. Recently, several works have focused on evaluating the quality of self-supervised pretrained representations across various tasks without domain restriction, e.g., SUPERB. However, such evaluations do not provide a comprehensive comparison across many ASR benchmark corpora. In this paper, we focus on the general application of pretrained speech representations to advanced end-to-end automatic speech recognition (E2E-ASR) models. We select several pretrained speech representations and present experimental results on various open-source and publicly available corpora for E2E-ASR. Without any modification of the back-end model architectures or training strategy, some of the experiments with pretrained representations, e.g., WSJ and WSJ0-2mix with HuBERT, reach or outperform the current state-of-the-art (SOTA) recognition performance. Moreover, we further explore scenarios in which the pretrained representations are effective, such as cross-language and overlapped speech. The scripts, configurations, and trained models have been released in ESPnet so that the community can reproduce and improve upon our experiments.
- Xuankai Chang (61 papers)
- Takashi Maekaku (9 papers)
- Pengcheng Guo (55 papers)
- Jing Shi (123 papers)
- Yen-Ju Lu (13 papers)
- Aswin Shanmugam Subramanian (20 papers)
- Tianzi Wang (37 papers)
- Shu-wen Yang (17 papers)
- Yu Tsao (200 papers)
- Hung-yi Lee (327 papers)
- Shinji Watanabe (416 papers)
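
The abstract describes replacing conventional front-end features with frozen self-supervised representations while leaving the E2E-ASR back-end untouched. The snippet below is a minimal sketch of that idea, not the paper's actual ESPnet/S3PRL recipe: it assumes torchaudio's pretrained HuBERT bundle (`torchaudio.pipelines.HUBERT_BASE`, which is not part of the paper's released artifacts) and simply shows how per-frame features could be extracted for a downstream encoder-decoder to consume in place of log-Mel filterbanks.

```python
# Minimal sketch: extract frozen HuBERT features as an ASR front-end.
# Assumes torchaudio >= 0.10; the paper's experiments instead use ESPnet with S3PRL upstreams.
import torch
import torchaudio

bundle = torchaudio.pipelines.HUBERT_BASE     # HuBERT Base pretrained on untranscribed speech
model = bundle.get_model().eval()             # keep the front-end frozen

# Dummy 1-second utterance at the bundle's expected sample rate (16 kHz).
waveform = torch.randn(1, bundle.sample_rate)

with torch.inference_mode():
    # extract_features returns one hidden-state tensor per transformer layer;
    # SUPERB-style setups often learn a weighted sum over these layers.
    hidden_states, _ = model.extract_features(waveform)

features = hidden_states[-1]                  # shape: (batch, frames, feature_dim)
print(features.shape)                         # e.g. torch.Size([1, 49, 768])
```

In this setup the E2E-ASR model (e.g., a Transformer or Conformer encoder-decoder) would take `features` as input instead of spectral features, which mirrors the paper's point that no change to the back-end architecture or training strategy is needed.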