An Exploration of Self-Supervised Pretrained Representations for End-to-End Speech Recognition (2110.04590v1)

Published 9 Oct 2021 in cs.CL, cs.SD, and eess.AS

Abstract: Self-supervised pretraining on speech data has made substantial progress. High-fidelity representations of the speech signal are learned from large amounts of untranscribed data and show promising performance. Recently, several works have focused on evaluating the quality of self-supervised pretrained representations on various tasks without domain restriction, e.g., SUPERB. However, such evaluations do not provide a comprehensive comparison across many ASR benchmark corpora. In this paper, we focus on the general application of pretrained speech representations to advanced end-to-end automatic speech recognition (E2E-ASR) models. We select several pretrained speech representations and present experimental results on various open-source and publicly available corpora for E2E-ASR. Without any modification of the back-end model architectures or training strategy, some experiments with pretrained representations, e.g., WSJ and WSJ0-2mix with HuBERT, reach or outperform current state-of-the-art (SOTA) recognition performance. Moreover, we further explore scenarios in which pretrained representations are effective, such as cross-language or overlapped speech. The scripts, configurations, and trained models have been released in ESPnet so that the community can reproduce our experiments and improve on them.
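The abstract's central design point is that the pretrained representation acts as a drop-in front-end: the back-end E2E-ASR model is unchanged whether it consumes conventional filterbank features or frozen self-supervised features. The toy sketch below illustrates that pluggable-frontend pattern only; all class names (`Frontend`, `FbankFrontend`, `PretrainedFrontend`, `E2EASRModel`) are hypothetical stand-ins, not ESPnet's actual API, and no real feature extraction or decoding is performed.

```python
# Toy illustration of swapping the ASR front-end (filterbank vs. a
# pretrained representation such as HuBERT) without touching the back-end.
# These names are illustrative assumptions, NOT ESPnet's real interface.
from typing import List, Protocol


class Frontend(Protocol):
    def extract(self, waveform: List[float]) -> List[List[float]]:
        """Turn a raw waveform into a sequence of feature vectors."""
        ...


class FbankFrontend:
    """Stand-in for a conventional filterbank feature extractor."""
    def extract(self, waveform: List[float]) -> List[List[float]]:
        # Chunk the waveform into fixed-size "frames" of 4 samples.
        return [waveform[i:i + 4] for i in range(0, len(waveform), 4)]


class PretrainedFrontend:
    """Stand-in for a frozen self-supervised model (e.g., HuBERT)."""
    def extract(self, waveform: List[float]) -> List[List[float]]:
        # A real implementation would run the pretrained network; here we
        # just scale each frame to mark the features as "different".
        return [[2.0 * x for x in waveform[i:i + 4]]
                for i in range(0, len(waveform), 4)]


class E2EASRModel:
    """The back-end is identical regardless of which frontend is plugged in."""
    def __init__(self, frontend: Frontend) -> None:
        self.frontend = frontend

    def transcribe(self, waveform: List[float]) -> str:
        feats = self.frontend.extract(waveform)
        return f"decoded {len(feats)} frames"


wave = [0.1] * 16
print(E2EASRModel(FbankFrontend()).transcribe(wave))       # decoded 4 frames
print(E2EASRModel(PretrainedFrontend()).transcribe(wave))  # decoded 4 frames
```

In ESPnet itself this swap is done via the training configuration rather than code changes, which is what lets the paper compare many representations across corpora with a fixed back-end.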

Authors (11)
  1. Xuankai Chang
  2. Takashi Maekaku
  3. Pengcheng Guo
  4. Jing Shi
  5. Yen-Ju Lu
  6. Aswin Shanmugam Subramanian
  7. Tianzi Wang
  8. Shu-wen Yang
  9. Yu Tsao
  10. Hung-yi Lee
  11. Shinji Watanabe
Citations (74)
