
On Scaling Contrastive Representations for Low-Resource Speech Recognition (2102.00850v1)

Published 1 Feb 2021 in eess.AS, cs.LG, and cs.SD

Abstract: Recent advances in self-supervised learning through contrastive training have shown that it is possible to learn a competitive speech recognition system with as little as 10 minutes of labeled data. However, these systems are computationally expensive since they require pre-training followed by fine-tuning in a large parameter space. We explore the performance of such systems without fine-tuning by training a state-of-the-art speech recognizer on the fixed representations from the computationally demanding wav2vec 2.0 framework. We find performance to decrease without fine-tuning and, in the extreme low-resource setting, wav2vec 2.0 is inferior to its predecessor. In addition, we find that wav2vec 2.0 representations live in a low dimensional subspace and that decorrelating the features of the representations can stabilize training of the automatic speech recognizer. Finally, we propose a bidirectional extension to the original wav2vec framework that consistently improves performance.
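The abstract's observation that decorrelating the fixed wav2vec 2.0 features can stabilize training of the downstream recognizer can be illustrated with a generic whitening step. The sketch below is a minimal illustration, assuming frozen representations stored as a `(num_frames, feature_dim)` NumPy array; the `decorrelate` helper and the PCA-whitening formulation are assumptions for illustration, not the paper's exact procedure.

```python
# A minimal sketch, assuming frozen wav2vec-style features are available as a
# (num_frames, feature_dim) NumPy array. It shows feature decorrelation via
# PCA whitening in general; the paper's exact decorrelation step may differ.
import numpy as np

def decorrelate(feats: np.ndarray, eps: float = 1e-5) -> np.ndarray:
    """PCA-whiten fixed representations so feature channels are uncorrelated."""
    centered = feats - feats.mean(axis=0, keepdims=True)
    cov = centered.T @ centered / (len(feats) - 1)       # (dim, dim) covariance
    eigvals, eigvecs = np.linalg.eigh(cov)                # symmetric eigendecomposition
    return centered @ eigvecs / np.sqrt(eigvals + eps)    # rotate and rescale channels

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    # Toy stand-in for frozen features: 1000 frames of 32 correlated channels.
    feats = rng.normal(size=(1000, 32)) @ rng.normal(size=(32, 32))
    white = decorrelate(feats)
    # After whitening, the covariance should be close to the identity matrix.
    print(np.abs(np.cov(white, rowvar=False) - np.eye(32)).max())
```

Such a whitening transform could be estimated on the training portion of the fixed representations and then applied before the downstream acoustic model, which is one common way to decorrelate feature channels.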

Authors (5)
  1. Lasse Borgholt (11 papers)
  2. Tycho Max Sylvester Tax (3 papers)
  3. Jakob Drachmann Havtorn (5 papers)
  4. Lars Maaløe (23 papers)
  5. Christian Igel (47 papers)
Citations (5)
