Supervised Contrastive Learning for Accented Speech Recognition (2107.00921v1)

Published 2 Jul 2021 in cs.SD, cs.LG, and eess.AS

Abstract: Neural network based speech recognition systems suffer from performance degradation due to accented speech, especially unfamiliar accents. In this paper, we study the supervised contrastive learning framework for accented speech recognition. To build different views (similar "positive" data samples) for contrastive learning, three data augmentation techniques are investigated: noise injection, spectrogram augmentation, and TTS-same-sentence generation. In experiments on the Common Voice dataset, we show that contrastive learning helps to build data-augmentation-invariant and pronunciation-invariant representations, which significantly outperform traditional joint training methods in both zero-shot and full-shot settings. On average, contrastive learning improves accuracy by 3.66% (zero-shot) and 3.78% (full-shot) compared to the joint training method.
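The page carries no code, so as a rough illustration of the framework the abstract describes, below is a minimal sketch of a supervised contrastive (SupCon-style) loss over augmented views. The function name, tensor shapes, and temperature value are illustrative assumptions, not the authors' implementation; the paper's exact formulation may differ.

```python
# Hypothetical sketch of a supervised contrastive loss (SupCon-style);
# not the authors' code. Assumes L2-normalized embeddings and integer
# labels identifying which views come from the same utterance/class.
import torch

def supervised_contrastive_loss(embeddings: torch.Tensor,
                                labels: torch.Tensor,
                                temperature: float = 0.1) -> torch.Tensor:
    # embeddings: (N, D) features from N augmented views, e.g. the
    # noise-injected, SpecAugmented, and TTS-regenerated versions of
    # the utterances in a batch. labels: (N,) view-identity labels.
    sim = embeddings @ embeddings.T / temperature      # (N, N) scaled similarities

    # Exclude self-similarity on the diagonal from the softmax denominator.
    n = sim.size(0)
    self_mask = torch.eye(n, dtype=torch.bool, device=sim.device)
    sim = sim.masked_fill(self_mask, float('-inf'))

    # Positives: views sharing a label, excluding self.
    pos_mask = (labels.unsqueeze(0) == labels.unsqueeze(1)) & ~self_mask

    # Log-softmax over each row, then average log-probability of positives.
    log_prob = sim - torch.logsumexp(sim, dim=1, keepdim=True)
    pos_counts = pos_mask.sum(dim=1)
    valid = pos_counts > 0                              # anchors with >= 1 positive
    sum_log_prob_pos = log_prob.masked_fill(~pos_mask, 0.0).sum(dim=1)
    mean_log_prob_pos = sum_log_prob_pos[valid] / pos_counts[valid]
    return -mean_log_prob_pos.mean()

# Example: 4 views (2 utterances x 2 augmentations), 8-dim embeddings.
feats = torch.nn.functional.normalize(torch.randn(4, 8), dim=1)
labels = torch.tensor([0, 0, 1, 1])
loss = supervised_contrastive_loss(feats, labels)
```

Under this setup, each utterance contributes several views and views sharing a label act as mutual positives, which is what would push the encoder toward the data-augmentation-invariant and pronunciation-invariant representations the abstract reports.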

Authors (4)
  1. Tao Han (233 papers)
  2. Hantao Huang (7 papers)
  3. Ziang Yang (4 papers)
  4. Wei Han (202 papers)
Citations (13)
