Applying Wav2vec2.0 to Speech Recognition in Various Low-resource Languages (2012.12121v2)

Published 22 Dec 2020 in cs.CL

Abstract: Several domains have widely used feature extractors, such as ResNet, BERT, and GPT-x. These models are typically pre-trained on large amounts of unlabeled data by self-supervision and can be applied effectively to downstream tasks. In the speech domain, wav2vec2.0 has begun to show its powerful representation ability and the feasibility of ultra-low-resource speech recognition on the Librispeech corpus, which belongs to the audiobook domain. However, wav2vec2.0 has not been examined on real spoken scenarios or on languages other than English. To verify its universality across languages, we apply pre-trained models to low-resource speech recognition tasks in various spoken languages. We achieve relative improvements of more than 20% in six languages compared with previous work; among these, English gains 52.4%. Moreover, coarse-grained modeling units, such as subwords or characters, achieve better results than fine-grained modeling units, such as phones or letters.
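To make the recipe the abstract describes concrete (fine-tuning a pre-trained wav2vec2.0 encoder with a CTC head over a chosen modeling-unit vocabulary), here is a minimal sketch using the Hugging Face transformers library. This is not the authors' implementation: the checkpoint name is a public English model chosen so the snippet runs as-is, and `train_audio`/`train_text` are hypothetical iterables standing in for a low-resource corpus. For another language or modeling unit (subword, character, phone), the tokenizer vocabulary would be rebuilt accordingly.

    # Minimal sketch: fine-tune a pre-trained wav2vec2.0 encoder with CTC.
    # Assumes the Hugging Face transformers library; "facebook/wav2vec2-base-960h"
    # is a public checkpoint with a character-level English vocabulary.
    import torch
    from transformers import Wav2Vec2ForCTC, Wav2Vec2Processor

    processor = Wav2Vec2Processor.from_pretrained("facebook/wav2vec2-base-960h")
    model = Wav2Vec2ForCTC.from_pretrained("facebook/wav2vec2-base-960h")
    model.freeze_feature_encoder()  # keep the convolutional feature extractor fixed
    model.train()

    optimizer = torch.optim.AdamW(model.parameters(), lr=1e-5)

    # train_audio: 16 kHz waveforms; train_text: transcripts (both hypothetical).
    for audio, text in zip(train_audio, train_text):
        inputs = processor(audio, sampling_rate=16_000, return_tensors="pt")
        # Tokenize the transcript into the modeling units of the vocabulary
        # (characters here; subwords or phones would use a different tokenizer).
        labels = processor(text=text, return_tensors="pt").input_ids
        loss = model(inputs.input_values, labels=labels).loss  # CTC loss
        loss.backward()
        optimizer.step()
        optimizer.zero_grad()

The key design point from the abstract is the choice of target vocabulary: only the tokenizer and the size of the final CTC projection change between modeling units, while the pre-trained encoder is reused unchanged across languages.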

Authors (5)
  1. Cheng Yi (5 papers)
  2. Jianzhong Wang (8 papers)
  3. Ning Cheng (96 papers)
  4. Shiyu Zhou (32 papers)
  5. Bo Xu (212 papers)
Citations (73)
