Scaling End-to-End Models for Large-Scale Multilingual ASR (2104.14830v2)

Published 30 Apr 2021 in cs.CL, cs.SD, and eess.AS

Abstract: Building ASR models across many languages is a challenging multi-task learning problem due to large variations and heavily unbalanced data. Existing work has shown positive transfer from high-resource to low-resource languages. However, degradations on high-resource languages are commonly observed due to interference from the heterogeneous multilingual data and reduction in per-language capacity. We conduct a capacity study on a 15-language task, with the amount of data per language varying from 7.6K to 53.5K hours. We adopt GShard [1] to efficiently scale up to 10B parameters. Empirically, we find that (1) scaling the number of model parameters is an effective way to solve the capacity bottleneck - our 500M-param model already outperforms monolingual baselines, and scaling it to 1B and 10B brings further quality gains; (2) larger models are not only more data efficient, but also more efficient in terms of training cost as measured in TPU days - the 1B-param model reaches the same accuracy at 34% of the training time of the 500M-param model; (3) given a fixed capacity budget, adding depth works better than width and large encoders do better than large decoders; (4) with continuous training, large models can be adapted to new languages and domains.
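The abstract's third finding is about how to spend a fixed parameter budget. As a rough illustration of the depth-versus-width trade-off (a back-of-envelope vanilla-Transformer estimate of about 12*d^2 parameters per encoder layer, not the paper's Conformer architecture; embeddings, convolution modules, and layer norms are ignored, and the layer counts and model dimensions below are hypothetical), the sketch compares two ways of reaching roughly the same ~1B-parameter encoder:

```python
# Back-of-envelope depth-vs-width accounting under a fixed parameter budget.
# Rough vanilla-Transformer figures only (attention ~4*d^2, feed-forward with
# 4x expansion ~8*d^2 per layer); the paper's Conformer blocks, embeddings,
# and layer norms are not modeled, and the configs below are hypothetical.

def approx_encoder_params(num_layers: int, d_model: int) -> int:
    """Approximate parameter count of a Transformer-style encoder stack."""
    attention = 4 * d_model * d_model      # Q, K, V and output projections
    feed_forward = 8 * d_model * d_model   # two linear layers, 4x expansion
    return num_layers * (attention + feed_forward)

# Two ways to spend roughly the same ~1B-parameter encoder budget:
deep_narrow = approx_encoder_params(num_layers=60, d_model=1200)
shallow_wide = approx_encoder_params(num_layers=24, d_model=1900)

print(f"deep/narrow  (60 x 1200): {deep_narrow / 1e9:.2f}B params")
print(f"shallow/wide (24 x 1900): {shallow_wide / 1e9:.2f}B params")
```

Under this kind of accounting, the abstract's finding (3) would favor the deeper, narrower configuration, and would allocate more of the budget to the encoder than to the decoder.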

Authors (10)
  1. Bo Li (1108 papers)
  2. Ruoming Pang (59 papers)
  3. Tara N. Sainath (79 papers)
  4. Anmol Gulati (13 papers)
  5. Yu Zhang (1403 papers)
  6. James Qin (20 papers)
  7. Parisa Haghani (15 papers)
  8. W. Ronny Huang (25 papers)
  9. Min Ma (14 papers)
  10. Junwen Bai (20 papers)
Citations (74)
