Massively Multilingual ASR: 50 Languages, 1 Model, 1 Billion Parameters (2007.03001v2)

Published 6 Jul 2020 in eess.AS, cs.CL, and cs.SD

Abstract: We study training a single acoustic model for multiple languages with the aim of improving automatic speech recognition (ASR) performance on low-resource languages, and overall simplifying deployment of ASR systems that support diverse languages. We perform an extensive benchmark on 51 languages, with varying amounts of training data by language (from 100 hours to 1100 hours). We compare three variants of multilingual training, from a single joint model without knowing the input language, to using this information, to multiple heads (one per language cluster). We show that multilingual training of ASR models on several languages can improve recognition performance, in particular on low-resource languages. We see 20.9%, 23% and 28.8% average relative WER reduction compared to monolingual baselines for the joint model, the joint model with language input, and the multi-head model, respectively. To our knowledge, this is the first work studying multilingual ASR at massive scale, with more than 50 languages and more than 16,000 hours of audio across them.
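
The three training variants differ mainly in how language identity is used: not at all, as an extra input, or through separate output heads per language cluster. As a rough illustration of the multi-head variant only, below is a minimal PyTorch sketch (not the authors' code; the encoder choice, head layout, and all names are assumptions) of a shared acoustic encoder with one output head per language cluster.

```python
import torch
import torch.nn as nn


class MultiHeadASRModel(nn.Module):
    """Sketch of a multi-head multilingual acoustic model: a shared encoder
    with one output head per language cluster (all names are hypothetical)."""

    def __init__(self, feat_dim: int, hidden_dim: int, cluster_vocab_sizes: dict):
        super().__init__()
        # Shared encoder over all languages (a stand-in for the large
        # encoder used in the paper).
        self.encoder = nn.LSTM(feat_dim, hidden_dim, num_layers=4, batch_first=True)
        # One linear output head per language cluster, each with its own
        # output vocabulary (e.g. the graphemes of that cluster's languages).
        self.heads = nn.ModuleDict({
            cluster: nn.Linear(hidden_dim, vocab_size)
            for cluster, vocab_size in cluster_vocab_sizes.items()
        })

    def forward(self, features: torch.Tensor, cluster: str) -> torch.Tensor:
        # features: (batch, time, feat_dim)
        encoded, _ = self.encoder(features)
        # Route each utterance to the head of its language cluster; a CTC
        # or seq2seq loss would then be applied to these logits.
        return self.heads[cluster](encoded)


# Usage: route a batch of utterances to a hypothetical "romance" cluster head.
model = MultiHeadASRModel(feat_dim=80, hidden_dim=512,
                          cluster_vocab_sizes={"romance": 120, "germanic": 90})
logits = model(torch.randn(8, 200, 80), cluster="romance")
```

The joint-model variants would instead use a single output head, optionally concatenating a language embedding to the input features; the sketch above only shows the per-cluster routing idea.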

Authors (7)
  1. Vineel Pratap (18 papers)
  2. Anuroop Sriram (32 papers)
  3. Paden Tomasello (17 papers)
  4. Awni Hannun (33 papers)
  5. Vitaliy Liptchinsky (12 papers)
  6. Gabriel Synnaeve (97 papers)
  7. Ronan Collobert (55 papers)
Citations (137)