Vistaar: Diverse Benchmarks and Training Sets for Indian Language ASR (2305.15386v2)

Published 24 May 2023 in cs.CL, cs.SD, and eess.AS

Abstract: Improving ASR systems is necessary to make new LLM-based use-cases accessible to people across the globe. In this paper, we focus on Indian languages, and make the case that diverse benchmarks are required to evaluate and improve ASR systems for Indian languages. To address this, we collate Vistaar as a set of 59 benchmarks across various language and domain combinations, on which we evaluate 3 publicly available ASR systems and 2 commercial systems. We also train IndicWhisper models by fine-tuning the Whisper models on publicly available training datasets across 12 Indian languages totalling 10.7K hours. We show that IndicWhisper significantly improves on the considered ASR systems on the Vistaar benchmark. Indeed, IndicWhisper has the lowest WER in 39 out of the 59 benchmarks, with an average reduction of 4.1 WER. We open-source all datasets, code and models.

Authors (6)
  1. Kaushal Santosh Bhogale (6 papers)
  2. Sai Sundaresan (3 papers)
  3. Abhigyan Raman (5 papers)
  4. Tahir Javed (9 papers)
  5. Mitesh M. Khapra (79 papers)
  6. Pratyush Kumar (44 papers)
Citations (8)

Summary

Vistaar: Diverse Benchmarks and Training Sets for Indian Language ASR

The paper "Vistaar: Diverse Benchmarks and Training Sets for Indian Language ASR" by Kaushal Santosh Bhogale et al. addresses critical improvements in Automatic Speech Recognition (ASR) systems within the context of Indian languages. With a significant portion of the Indian population being print illiterate and the nation's linguistic diversity, the development of accurate ASR systems becomes significantly impactful. This paper proposes Vistaar, a collection of diverse benchmarks, and introduces IndicWhisper, a family of ASR models fine-tuned for Indian languages.

Key Contributions

  1. Vistaar Benchmark Compilation: The authors curate a set of 59 benchmarks spanning 12 Indian languages and a variety of domains and data types, drawing on datasets such as Kathbath, CommonVoice, and FLEURS. Together these benchmarks cover diverse speakers, recording environments, and domains, from studio-quality to crowd-sourced audio.
  2. IndicWhisper ASR Models: The paper introduces IndicWhisper, obtained by fine-tuning OpenAI's Whisper models on Vistaar-Train, a training set aggregating 10.7K hours of audio across 12 languages from sources such as Shrutilipi, NPTEL, and IndicTTS (a minimal fine-tuning sketch follows this list).
  3. Evaluation Results: IndicWhisper outperforms publicly available models such as IndicWav2Vec as well as commercial ASR systems from Google and Azure, achieving the lowest Word Error Rate (WER) on 39 of the 59 benchmarks.
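The paper's exact training recipe ships with its released code; the following is only a minimal sketch of what Whisper fine-tuning of this kind looks like with Hugging Face `transformers`, using Common Voice Hindi as a stand-in for the Vistaar-Train mixture and illustrative (not the paper's) hyperparameters.

```python
# Hedged sketch of Whisper fine-tuning for one Indian language; the dataset,
# model size, and hyperparameters are illustrative, not the paper's.
from dataclasses import dataclass

from datasets import Audio, load_dataset
from transformers import (
    Seq2SeqTrainer,
    Seq2SeqTrainingArguments,
    WhisperForConditionalGeneration,
    WhisperProcessor,
)

processor = WhisperProcessor.from_pretrained(
    "openai/whisper-medium", language="hindi", task="transcribe"
)
model = WhisperForConditionalGeneration.from_pretrained("openai/whisper-medium")

# Stand-in corpus; Vistaar-Train instead mixes Shrutilipi, NPTEL, IndicTTS, etc.
ds = load_dataset("mozilla-foundation/common_voice_11_0", "hi", split="train")
ds = ds.cast_column("audio", Audio(sampling_rate=16_000))

def prepare(batch):
    # log-mel features for the encoder, token ids as decoder targets
    audio = batch["audio"]
    batch["input_features"] = processor(
        audio["array"], sampling_rate=audio["sampling_rate"]
    ).input_features[0]
    batch["labels"] = processor.tokenizer(batch["sentence"]).input_ids
    return batch

ds = ds.map(prepare, remove_columns=ds.column_names)

@dataclass
class SpeechCollator:
    processor: WhisperProcessor

    def __call__(self, features):
        # pad audio features and label token ids separately
        inputs = [{"input_features": f["input_features"]} for f in features]
        batch = self.processor.feature_extractor.pad(inputs, return_tensors="pt")
        labels = self.processor.tokenizer.pad(
            [{"input_ids": f["labels"]} for f in features], return_tensors="pt"
        )
        # mask label padding so it is ignored by the cross-entropy loss
        batch["labels"] = labels["input_ids"].masked_fill(
            labels["attention_mask"].ne(1), -100
        )
        return batch

args = Seq2SeqTrainingArguments(
    output_dir="whisper-hi-sketch",
    per_device_train_batch_size=16,
    learning_rate=1e-5,  # illustrative; see the released code for actual values
    max_steps=5000,
    fp16=True,
)
trainer = Seq2SeqTrainer(
    model=model,
    args=args,
    train_dataset=ds,
    data_collator=SpeechCollator(processor),
)
trainer.train()
```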

Detailed Analysis

The paper presents a meticulous evaluation of existing ASR systems on the Vistaar benchmark. Results show sizeable gaps between ASR models, with IndicWhisper outperforming the others by significant margins, especially in challenging acoustic environments. They also show that ASR performance varies heavily with the evaluation dataset: relying on a single benchmark can misrepresent a model's effectiveness across conditions and languages, which is why scores are reported per benchmark (illustrated below) rather than as one aggregate.
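As a concrete illustration of per-benchmark scoring (this is not the paper's evaluation code), WER can be computed per benchmark with the `jiwer` package and only then aggregated. The benchmark names and sentence pairs below are hypothetical, and real pipelines typically normalize text before scoring.

```python
# Toy per-benchmark WER computation; the data is hypothetical. Real
# evaluation pipelines typically normalize punctuation/case before scoring.
import jiwer

# hypothetical (reference, hypothesis) pairs for two benchmarks
benchmarks = {
    "kathbath-hi": [("नमस्ते दुनिया", "नमस्ते दुनिया")],
    "fleurs-hi": [("यह एक परीक्षण है", "यह एक परिक्षण है")],
}

per_benchmark_wer = {}
for name, pairs in benchmarks.items():
    refs, hyps = zip(*pairs)
    per_benchmark_wer[name] = jiwer.wer(list(refs), list(hyps))

for name, w in per_benchmark_wer.items():
    print(f"{name}: WER = {100 * w:.1f}%")

# a single aggregate number can hide large per-benchmark differences
macro = sum(per_benchmark_wer.values()) / len(per_benchmark_wer)
print(f"macro-average WER = {100 * macro:.1f}%")
```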

Implications and Future Directions

This research has far-reaching implications for making technology accessible to non-English speakers through robust ASR systems. The successful development and deployment of such systems could have substantial societal impact, transforming how information and services are accessed in linguistically diverse regions.

Future work should explore:

  • Expanding training datasets to include even more linguistic variety,
  • Developing strategies to balance ASR accuracy across different languages with varied amounts of training data,
  • Implementing domain-specific language models (including LLMs) that can complement generic acoustic models, thereby improving performance in specialized applications (a toy rescoring sketch follows this list).
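One classic way to realize the last bullet is n-best rescoring: the ASR system emits several candidate transcripts with acoustic scores, and a domain language model re-ranks them. The sketch below is a toy illustration; `lm_log_prob` is a hypothetical stand-in for a real domain LM (e.g., an n-gram or neural model).

```python
# Toy n-best rescoring with a domain "language model"; lm_log_prob is a
# hypothetical stand-in for a real LM scoring function.
def lm_log_prob(sentence: str) -> float:
    # stand-in: reward in-domain vocabulary, lightly penalize length
    domain_terms = {"prescription", "dosage", "tablet"}
    words = sentence.lower().split()
    return -len(words) + 2.0 * sum(w in domain_terms for w in words)

def rescore(nbest, lm_weight=0.3):
    """nbest: list of (hypothesis, acoustic_log_prob); returns best hypothesis."""
    return max(nbest, key=lambda h: h[1] + lm_weight * lm_log_prob(h[0]))[0]

nbest = [
    ("take one table twice a day", -4.1),   # acoustically slightly preferred
    ("take one tablet twice a day", -4.3),  # preferred by the in-domain LM
]
print(rescore(nbest))  # -> "take one tablet twice a day"
```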

In conclusion, the paper makes a substantial contribution to ASR for low-resource languages by establishing a well-rounded benchmark and demonstrating, through the IndicWhisper models, the benefits of diverse training data. The full open-sourcing of the datasets, code, and models supports reproducibility and further research in this domain.
