
Multilingual Graphemic Hybrid ASR with Massive Data Augmentation (1909.06522v3)

Published 14 Sep 2019 in eess.AS, cs.CL, cs.LG, and cs.SD

Abstract: To develop high-performing ASR for low-resource languages, two approaches to the lack of resources are to make use of data from multiple languages and to augment the training data by creating acoustic variations. In this work we present a single grapheme-based ASR model trained on 7 geographically proximal languages, using standard hybrid BLSTM-HMM acoustic models with the lattice-free MMI objective. We build the single ASR grapheme set by taking the union of each language-specific grapheme set, and we find that such a multilingual graphemic hybrid ASR model can perform language-independent recognition on all 7 languages and substantially outperform each monolingual ASR model. Secondly, we evaluate the efficacy of multiple data augmentation alternatives within each language, as well as their complementarity with multilingual modeling. Overall, we show that the proposed multilingual graphemic hybrid ASR with various data augmentation can not only recognize any language in the training set, but also provide large ASR performance improvements.
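The grapheme-set construction the abstract describes — a single multilingual output symbol set formed as the union of each language's grapheme inventory — can be sketched as follows. This is a minimal illustration, not the paper's code; the corpora, language codes, and transcripts below are invented placeholders.

```python
# Hypothetical per-language transcripts; in the paper these would be the
# training corpora of the 7 geographically proximal languages.
corpora = {
    "lang_a": ["hello world", "good day"],
    "lang_b": ["hola mundo", "buenos días"],
    "lang_c": ["hallo welt", "guten tag"],
}

def grapheme_set(transcripts):
    """Collect the unique graphemes (characters) used in a language's transcripts."""
    return {ch for line in transcripts for ch in line if not ch.isspace()}

# Language-specific grapheme inventories
per_language = {lang: grapheme_set(lines) for lang, lines in corpora.items()}

# Single multilingual grapheme set: the union over all language-specific sets,
# so one acoustic model can emit graphemes of any training-set language.
multilingual = set().union(*per_language.values())
```

Because every language's inventory is a subset of the shared set, the resulting model needs no language identifier at decoding time: any grapheme it emits is valid for some training language.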

Authors (6)
  1. Chunxi Liu (20 papers)
  2. Qiaochu Zhang (3 papers)
  3. Xiaohui Zhang (105 papers)
  4. Kritika Singh (9 papers)
  5. Yatharth Saraf (21 papers)
  6. Geoffrey Zweig (20 papers)
Citations (26)
