
Joint Masked CPC and CTC Training for ASR (2011.00093v2)

Published 30 Oct 2020 in cs.CL, cs.LG, and cs.SD

Abstract: Self-supervised learning (SSL) has shown promise in learning representations of audio that are useful for automatic speech recognition (ASR). But, training SSL models like wav2vec 2.0 requires a two-stage pipeline. In this paper we demonstrate a single-stage training of ASR models that can utilize both unlabeled and labeled data. During training, we alternately minimize two losses: an unsupervised masked Contrastive Predictive Coding (CPC) loss and the supervised audio-to-text alignment loss Connectionist Temporal Classification (CTC). We show that this joint training method directly optimizes performance for the downstream ASR task using unsupervised data while achieving similar word error rates to wav2vec 2.0 on the Librispeech 100-hour dataset. Finally, we postulate that solving the contrastive task is a regularization for the supervised CTC loss.
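The alternating-loss schedule described in the abstract can be sketched as follows. This is a minimal illustration of the scheduling logic only, not the paper's implementation: the loss names are placeholders, and the counters stand in for real forward/backward passes on a shared encoder.

```python
from itertools import cycle

def alternating_schedule(n_steps):
    """Return which objective each training step minimizes.

    The paper alternates an unsupervised masked-CPC step (drawn from
    unlabeled audio) with a supervised CTC step (drawn from labeled
    audio), so both losses update the same model in a single stage.
    """
    losses = cycle(["masked_cpc", "ctc"])
    return [next(losses) for _ in range(n_steps)]

def train(n_steps):
    # Placeholder bookkeeping: in real training, each step would run a
    # forward pass with the corresponding batch and loss, then an
    # optimizer update on the shared encoder parameters.
    steps_per_loss = {"masked_cpc": 0, "ctc": 0}
    for loss_name in alternating_schedule(n_steps):
        steps_per_loss[loss_name] += 1  # stand-in for optimizer.step()
    return steps_per_loss

print(train(10))  # {'masked_cpc': 5, 'ctc': 5}
```

The even split shown here is one simple choice; the key property is that unlabeled batches contribute gradient updates throughout training rather than only in a separate pretraining stage.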

Authors (4)
  1. Chaitanya Talnikar (9 papers)
  2. Tatiana Likhomanenko (41 papers)
  3. Ronan Collobert (55 papers)
  4. Gabriel Synnaeve (97 papers)
Citations (27)
