Wav2vec-C: A Self-supervised Model for Speech Representation Learning (2103.08393v2)

Published 9 Mar 2021 in eess.AS, cs.LG, and cs.SD

Abstract: Wav2vec-C introduces a novel representation learning technique combining elements from wav2vec 2.0 and VQ-VAE. Our model learns to reproduce quantized representations from a partially masked speech encoding using a contrastive loss, in a way similar to wav2vec 2.0. However, the quantization process is regularized by an additional consistency network that learns to reconstruct the input features to the wav2vec 2.0 network from the quantized representations, in a way similar to a VQ-VAE model. The proposed self-supervised model is trained on 10k hours of unlabeled data, subsequently used as the speech encoder in an RNN-T ASR model, and fine-tuned with 1k hours of labeled data. This work is one of only a few studies of self-supervised learning on speech tasks with a large volume of real far-field labeled data. The Wav2vec-C encoded representations achieve, on average, twice the error reduction over baseline and a higher codebook utilization in comparison to wav2vec 2.0.
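
The abstract describes a two-term objective: a wav2vec 2.0-style contrastive loss over masked, quantized frames, plus a VQ-VAE-style consistency loss that reconstructs the encoder's input features from the quantized codes. Below is a minimal sketch of how such terms could be combined, assuming a PyTorch-style setup; the `model` interface (`encode`, `quantize`, `apply_mask`, `sample_negatives`, `contextualize`, `reconstruct`), the mean-squared-error reconstruction term, and the weight `gamma` are illustrative assumptions, not the authors' released implementation.

```python
# Illustrative sketch of a Wav2vec-C-style training objective.
# Module names, shapes, and hyperparameters are assumptions for
# exposition; they are not taken from the paper's code.
import torch
import torch.nn.functional as F


def contrastive_loss(context, targets, negatives, temperature=0.1):
    """wav2vec 2.0-style InfoNCE loss at masked positions.

    context:   (N, D) contextualized vectors at the N masked frames
    targets:   (N, D) quantized targets at the same frames
    negatives: (N, K, D) distractor quantized vectors
    """
    pos = F.cosine_similarity(context, targets, dim=-1)                 # (N,)
    neg = F.cosine_similarity(context.unsqueeze(1), negatives, dim=-1)  # (N, K)
    logits = torch.cat([pos.unsqueeze(1), neg], dim=1) / temperature    # (N, 1+K)
    labels = torch.zeros(logits.size(0), dtype=torch.long,
                         device=logits.device)  # true target is index 0
    return F.cross_entropy(logits, labels)


def wav2vec_c_loss(features, model, gamma=1.0):
    """Combine the contrastive and consistency terms.

    features: (B, T, F) input speech features. `model` is assumed to
    expose the hypothetical methods used below.
    """
    z = model.encode(features)              # frame-level latents (B, T, D)
    q, codebook_loss = model.quantize(z)    # quantized latents + VQ penalty

    # Contrastive branch: mask a subset of frames and predict their
    # quantized targets against sampled distractors (as in wav2vec 2.0).
    masked_q, mask = model.apply_mask(q)    # mask: (B, T) boolean
    context = model.contextualize(masked_q)
    negatives = model.sample_negatives(q, mask)
    l_contrastive = contrastive_loss(context[mask], q[mask], negatives)

    # Consistency branch: reconstruct the input features from the
    # quantized representations, as in a VQ-VAE.
    features_hat = model.reconstruct(q)
    l_consistency = F.mse_loss(features_hat, features)

    return l_contrastive + gamma * l_consistency + codebook_loss
```

On this reading, the consistency branch is what regularizes the quantizer: reconstruction gradients flow back through the codebook, which plausibly accounts for the higher codebook utilization the abstract reports relative to wav2vec 2.0.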

Authors (9)
  1. Samik Sadhu (8 papers)
  2. Di He (108 papers)
  3. Che-Wei Huang (8 papers)
  4. Sri Harish Mallidi (7 papers)
  5. Minhua Wu (12 papers)
  6. Ariya Rastrow (55 papers)
  7. Andreas Stolcke (57 papers)
  8. Jasha Droppo (24 papers)
  9. Roland Maas (24 papers)
Citations (45)
