
Comparison of Multilingual Self-Supervised and Weakly-Supervised Speech Pre-Training for Adaptation to Unseen Languages (2305.12606v2)

Published 21 May 2023 in cs.CL, cs.SD, and eess.AS

Abstract: Recent models such as XLS-R and Whisper have made multilingual speech technologies more accessible by pre-training on audio from around 100 spoken languages each. However, there are thousands of spoken languages worldwide, and adapting to new languages is an important problem. In this work, we aim to understand which model adapts better to languages unseen during pre-training. We fine-tune both models on 13 unseen languages and 18 seen languages. Our results show that the number of hours seen per language and language family during pre-training is predictive of how the models compare, despite the significant differences in the pre-training methods.
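
The abstract compares fine-tuning a weakly-supervised model (Whisper) against a self-supervised model (XLS-R) on languages unseen during pre-training. Below is a minimal, hedged sketch of how the two checkpoints are typically prepared for ASR fine-tuning on a new language using Hugging Face Transformers; it is not the paper's exact pipeline, and the checkpoint names, the placeholder vocabulary size, and the omitted training loop are assumptions for illustration only.

```python
# Sketch: preparing Whisper (weakly supervised) and XLS-R (self-supervised)
# for fine-tuning on a new language. Checkpoint names and vocab_size are
# illustrative assumptions; data loading, training, and evaluation omitted.
import torch
from transformers import (
    WhisperProcessor,
    WhisperForConditionalGeneration,
    Wav2Vec2FeatureExtractor,
    Wav2Vec2ForCTC,
)

# Whisper: a sequence-to-sequence model fine-tuned end-to-end on the
# target language's transcribed speech.
whisper_processor = WhisperProcessor.from_pretrained("openai/whisper-small")
whisper = WhisperForConditionalGeneration.from_pretrained("openai/whisper-small")

# XLS-R: a self-supervised encoder that needs a freshly initialized CTC head
# sized to the target language's character vocabulary (64 is a placeholder).
xlsr_extractor = Wav2Vec2FeatureExtractor.from_pretrained(
    "facebook/wav2vec2-xls-r-300m"
)
xlsr = Wav2Vec2ForCTC.from_pretrained(
    "facebook/wav2vec2-xls-r-300m",
    vocab_size=64,
    ctc_loss_reduction="mean",
    ignore_mismatched_sizes=True,
)

# Both models expect 16 kHz audio; a dummy pass confirms the input shapes.
dummy_audio = torch.randn(16000)
whisper_inputs = whisper_processor(
    dummy_audio.numpy(), sampling_rate=16000, return_tensors="pt"
)
xlsr_inputs = xlsr_extractor(
    dummy_audio.numpy(), sampling_rate=16000, return_tensors="pt"
)
print(whisper_inputs.input_features.shape)  # log-Mel features, e.g. (1, 80, 3000)
print(xlsr_inputs.input_values.shape)       # raw waveform, e.g. (1, 16000)
```

In practice, only the CTC head and upper encoder layers of XLS-R are often updated, while Whisper is usually fine-tuned end-to-end; the paper's specific fine-tuning recipe is described in the full text rather than the abstract.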

Authors (9)
  1. Andrew Rouditchenko (21 papers)
  2. Sameer Khurana (26 papers)
  3. Samuel Thomas (42 papers)
  4. Rogerio Feris (105 papers)
  5. Leonid Karlinsky (79 papers)
  6. Hilde Kuehne (69 papers)
  7. David Harwath (55 papers)
  8. Brian Kingsbury (54 papers)
  9. James Glass (173 papers)
Citations (17)
