Progressive Multi-Scale Self-Supervised Learning for Speech Recognition (2212.03480v1)

Published 7 Dec 2022 in eess.AS and cs.SD

Abstract: Self-supervised learning (SSL) models have achieved considerable improvements in automatic speech recognition (ASR). In theory, ASR performance could be improved further if the model were dedicated to learning audio content information. To this end, we propose a progressive multi-scale self-supervised learning (PMS-SSL) method, which uses fine-grained target sets to compute the SSL loss at the top layer and coarse-grained target sets at intermediate layers. Furthermore, PMS-SSL introduces a multi-scale structure into multi-head self-attention for better speech representation, restricting the attention area to a large scope at higher layers and to a small scope at lower layers. Experiments on the LibriSpeech dataset indicate the effectiveness of our proposed method. Compared with HuBERT, PMS-SSL achieves a 13.7% / 12.7% relative WER reduction on the test-other evaluation subset when fine-tuned on the 10-hour / 100-hour subsets, respectively.
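The abstract describes two concrete mechanisms: (1) layer-dependent attention scopes that grow from a small window at lower layers to a large window at higher layers, and (2) masked-prediction losses computed against coarse-grained cluster targets at intermediate layers and fine-grained targets at the top layer. The PyTorch sketch below is a minimal illustration of how these two ideas could be combined; the module structure, `window_sizes`, and `target_vocab_sizes` are hypothetical choices for illustration, not the authors' implementation.

```python
# A minimal sketch of PMS-SSL's two ideas, assuming a HuBERT-style setup.
# window_sizes, target_vocab_sizes, and the helper below are illustrative
# assumptions, not the paper's released code.
import torch
import torch.nn as nn
import torch.nn.functional as F

def local_attention_mask(seq_len: int, window: int) -> torch.Tensor:
    """Boolean (T, T) mask: True blocks attention beyond +/- window frames."""
    idx = torch.arange(seq_len)
    dist = (idx[None, :] - idx[:, None]).abs()
    return dist > window

class MultiScaleEncoder(nn.Module):
    def __init__(self, d_model=256, n_heads=4, n_layers=6,
                 window_sizes=(4, 8, 16, 32, 64, 128),  # small -> large scope
                 target_vocab_sizes=None):
        super().__init__()
        assert len(window_sizes) == n_layers
        # Coarse targets (100 clusters) at layer 2, fine (500) at the top layer.
        target_vocab_sizes = target_vocab_sizes or {2: 100, 5: 500}
        self.layers = nn.ModuleList(
            nn.TransformerEncoderLayer(d_model, n_heads, batch_first=True)
            for _ in range(n_layers))
        self.windows = window_sizes
        # One classifier per supervised layer, sized to that layer's granularity.
        self.heads = nn.ModuleDict(
            {str(l): nn.Linear(d_model, v) for l, v in target_vocab_sizes.items()})

    def forward(self, x, targets, mask_indices):
        # x: (B, T, d_model) frame features
        # targets: dict mapping supervised layer index -> (B, T) cluster ids
        # mask_indices: (B, T) bool, True where frames were masked (HuBERT-style)
        losses = {}
        for i, layer in enumerate(self.layers):
            attn_mask = local_attention_mask(x.size(1), self.windows[i]).to(x.device)
            x = layer(x, src_mask=attn_mask)
            if str(i) in self.heads:
                logits = self.heads[str(i)](x)  # (B, T, V_i)
                # SSL loss only on masked frames, against this layer's targets.
                losses[i] = F.cross_entropy(
                    logits[mask_indices], targets[i][mask_indices])
        return x, losses
```

In a HuBERT-style setup, the coarse and fine target ids would come from k-means clusterings of acoustic features with fewer or more centroids, and the total pre-training loss would be a weighted sum of the per-layer losses collected above.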

Authors (6)
  1. Genshun Wan
  2. Tan Liu
  3. Hang Chen
  4. Jia Pan
  5. Cong Liu
  6. Zhongfu Ye
