Improving Noise Robustness of Contrastive Speech Representation Learning with Speech Reconstruction (2110.15430v1)

Published 28 Oct 2021 in cs.SD, cs.CL, and eess.AS

Abstract: Noise robustness is essential for deploying automatic speech recognition (ASR) systems in real-world environments. One way to reduce the effect of noise interference is to employ a preprocessing module that conducts speech enhancement, and then feed the enhanced speech to an ASR backend. In this work, instead of suppressing background noise with a conventional cascaded pipeline, we employ a noise-robust representation learned by a refined self-supervised framework for noisy speech recognition. We propose to combine a reconstruction module with contrastive learning and perform multi-task continual pre-training on noisy data. The reconstruction module is used for auxiliary learning to improve the noise robustness of the learned representation and thus is not required during inference. Experiments demonstrate the effectiveness of our proposed method. Our model substantially reduces the word error rate (WER) for the synthesized noisy LibriSpeech test sets, and yields around 4.1/7.5% WER reduction on noisy clean/other test sets compared to data augmentation. For the real-world noisy speech from the CHiME-4 challenge (1-channel track), we have obtained the state of the art ASR performance without any denoising front-end. Moreover, we achieve comparable performance to the best supervised approach reported with only 16% of labeled data.

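The abstract describes a multi-task pre-training objective: a contrastive loss computed on noisy input plus an auxiliary reconstruction loss whose module is discarded at inference. The sketch below illustrates that general idea in PyTorch with a wav2vec 2.0-style contrastive term and an L1 reconstruction term toward clean features; all names, shapes, and the loss weighting are illustrative assumptions, not the authors' implementation.

```python
# Hypothetical sketch of a contrastive + reconstruction multi-task loss
# for continual pre-training on noisy speech. Module names and tensor
# shapes are assumptions for illustration only.
import torch
import torch.nn as nn
import torch.nn.functional as F


class NoiseRobustPretrainLoss(nn.Module):
    def __init__(self, dim=256, recon_weight=1.0, temperature=0.1):
        super().__init__()
        # Auxiliary reconstruction head: used only during pre-training,
        # dropped at inference (as the abstract notes).
        self.recon_head = nn.Linear(dim, dim)
        self.recon_weight = recon_weight
        self.temperature = temperature

    def contrastive(self, context, quantized, negatives):
        # context:   (B, T, D) contextual outputs at masked positions
        # quantized: (B, T, D) positive targets
        # negatives: (B, T, K, D) distractors sampled from other time steps
        pos = F.cosine_similarity(context, quantized, dim=-1).unsqueeze(-1)   # (B, T, 1)
        neg = F.cosine_similarity(context.unsqueeze(2), negatives, dim=-1)    # (B, T, K)
        logits = torch.cat([pos, neg], dim=-1) / self.temperature             # (B, T, 1+K)
        target = torch.zeros(logits.shape[:-1], dtype=torch.long, device=logits.device)
        return F.cross_entropy(logits.flatten(0, 1), target.flatten())

    def forward(self, context, quantized, negatives, clean_features):
        # Contrastive term on the noisy input (self-supervised objective).
        l_contrastive = self.contrastive(context, quantized, negatives)
        # Auxiliary term: reconstruct clean features from the noisy context.
        l_recon = F.l1_loss(self.recon_head(context), clean_features)
        return l_contrastive + self.recon_weight * l_recon


# Toy usage with random tensors (B=2, T=50, K=10 negatives, D=256).
B, T, K, D = 2, 50, 10, 256
loss_fn = NoiseRobustPretrainLoss(dim=D)
loss = loss_fn(torch.randn(B, T, D), torch.randn(B, T, D),
               torch.randn(B, T, K, D), torch.randn(B, T, D))
print(loss.item())
```

Because the reconstruction head is a separate module added on top of the shared encoder, it can simply be omitted at inference time, leaving the noise-robust representation for the ASR backend.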
Authors (9)
  1. Heming Wang (45 papers)
  2. Yao Qian (37 papers)
  3. Xiaofei Wang (138 papers)
  4. Yiming Wang (141 papers)
  5. Chengyi Wang (32 papers)
  6. Shujie Liu (101 papers)
  7. Takuya Yoshioka (77 papers)
  8. Jinyu Li (164 papers)
  9. DeLiang Wang (43 papers)
Citations (27)