
IndicVoices: Towards building an Inclusive Multilingual Speech Dataset for Indian Languages (2403.01926v1)

Published 4 Mar 2024 in cs.CL

Abstract: We present INDICVOICES, a dataset of natural and spontaneous speech containing a total of 7348 hours of read (9%), extempore (74%) and conversational (17%) audio from 16237 speakers covering 145 Indian districts and 22 languages. Of these 7348 hours, 1639 hours have already been transcribed, with a median of 73 hours per language. Through this paper, we share our journey of capturing the cultural, linguistic and demographic diversity of India to create a one-of-its-kind inclusive and representative dataset. More specifically, we share an open-source blueprint for data collection at scale comprising of standardised protocols, centralised tools, a repository of engaging questions, prompts and conversation scenarios spanning multiple domains and topics of interest, quality control mechanisms, comprehensive transcription guidelines and transcription tools. We hope that this open source blueprint will serve as a comprehensive starter kit for data collection efforts in other multilingual regions of the world. Using INDICVOICES, we build IndicASR, the first ASR model to support all the 22 languages listed in the 8th schedule of the Constitution of India. All the data, tools, guidelines, models and other materials developed as a part of this work will be made publicly available


Summary

IndicVoices: Towards building an Inclusive Multilingual Speech Dataset for Indian Languages

Introduction

The paper introduces IndicVoices, a dataset capturing the linguistic, cultural, and demographic diversity of India, spanning 22 languages and 145 districts with contributions from 16,237 speakers. The initiative addresses the critical shortage of labeled data for Indian languages, which has historically held back the performance of Automatic Speech Recognition (ASR) for languages other than English. The dataset totals 7,348 hours of audio, comprising read (9%), extempore (74%), and conversational (17%) speech, and offers a rich resource for developing inclusive language technologies.
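
As a quick sanity check on the reported composition (all numbers taken from the abstract), the following minimal Python sketch converts the percentage split into approximate hours; it is illustrative arithmetic only, not part of the paper.

```python
# Approximate hours per speech type from the reported split (illustrative only).
TOTAL_HOURS = 7348
split = {"read": 0.09, "extempore": 0.74, "conversational": 0.17}

for speech_type, fraction in split.items():
    print(f"{speech_type:>14}: ~{TOTAL_HOURS * fraction:,.0f} hours")
# read ~661 h, extempore ~5,438 h, conversational ~1,249 h
# (percentages are rounded in the paper, so these are approximations).
```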

The Dataset's Composition and Collection Process

The paper delineates the meticulous process of dataset creation, emphasizing the commitment to capturing the multifaceted diversity of India. The authors crafted a dataset reflecting varied demographics (age, gender, educational background), types of speech (read, extempore, conversational), and recording conditions (diverse environments, wide/narrow-band recordings). A pivotal component of their methodology was the development of a centralized, open-source blueprint for scalable data collection. This framework facilitated the structured collection of spontaneous speech data reflecting real-world usage scenarios, thereby enhancing the dataset's applicability for practical ASR applications.
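
To make the collection dimensions concrete, here is a minimal sketch of how one might represent and filter such metadata when working with the corpus. The field names, record layout, and example values are hypothetical illustrations, not the authors' released schema.

```python
from dataclasses import dataclass

@dataclass
class Utterance:
    """Hypothetical metadata record for one recording (field names are illustrative)."""
    audio_path: str
    language: str
    speech_type: str   # "read", "extempore", or "conversational"
    district: str
    age: int
    gender: str
    education: str
    bandwidth: str     # "wide" or "narrow"

def filter_utterances(records, language=None, speech_type=None, min_age=None, max_age=None):
    """Select a demographic/linguistic slice of the corpus for analysis or training."""
    out = []
    for r in records:
        if language and r.language != language:
            continue
        if speech_type and r.speech_type != speech_type:
            continue
        if min_age is not None and r.age < min_age:
            continue
        if max_age is not None and r.age > max_age:
            continue
        out.append(r)
    return out

# Example (hypothetical): conversational Maithili speech from speakers aged 18-30.
# subset = filter_utterances(records, language="Maithili",
#                            speech_type="conversational", min_age=18, max_age=30)
```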

Comparison with Existing Datasets

IndicVoices distinguishes itself by its scale and scope: it covers 22 languages and already provides 1,639 hours of transcribed speech (a median of 73 hours per language), far surpassing existing datasets in linguistic and demographic diversity. This breadth ensures a more holistic representation of India's linguistic landscape, making it an unparalleled resource for training robust, inclusive ASR models.

ASR Model Development and Benchmarking

Utilizing IndicVoices, the authors developed IndicASR, the first ASR model to support all 22 languages listed in the 8th Schedule of the Constitution of India. Initial benchmarking shows that IndicASR significantly outperforms existing models, underscoring the dataset's effectiveness in enhancing ASR performance for Indian languages. This model sets a new standard for speech recognition accuracy and inclusivity, demonstrating the potential of well-curated, diverse datasets in advancing language technologies.
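
Benchmarking of this kind is typically reported as word error rate (WER) per language. The self-contained sketch below implements the standard Levenshtein-based WER and aggregates it over hypothetical (language, reference, hypothesis) triples; it is for illustration only and is not the authors' evaluation pipeline.

```python
def wer(reference: str, hypothesis: str) -> float:
    """Word error rate: (substitutions + insertions + deletions) / reference length."""
    ref, hyp = reference.split(), hypothesis.split()
    # Dynamic-programming edit distance over words.
    d = [[0] * (len(hyp) + 1) for _ in range(len(ref) + 1)]
    for i in range(len(ref) + 1):
        d[i][0] = i
    for j in range(len(hyp) + 1):
        d[0][j] = j
    for i in range(1, len(ref) + 1):
        for j in range(1, len(hyp) + 1):
            cost = 0 if ref[i - 1] == hyp[j - 1] else 1
            d[i][j] = min(d[i - 1][j] + 1,          # deletion
                          d[i][j - 1] + 1,          # insertion
                          d[i - 1][j - 1] + cost)   # substitution or match
    return d[len(ref)][len(hyp)] / max(len(ref), 1)

# Hypothetical example: aggregate WER per language over evaluation triples.
results = [("Hindi", "मौसम आज अच्छा है", "मौसम आज अच्छा है"),
           ("Hindi", "मुझे चाय पसंद है", "मुझे चाय पसंद हैं")]
per_lang = {}
for lang, ref, hyp in results:
    per_lang.setdefault(lang, []).append(wer(ref, hyp))
for lang, scores in per_lang.items():
    print(lang, f"mean WER = {sum(scores) / len(scores):.2%}")
```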

Practical and Theoretical Implications

Beyond ASR, the dataset's structure and comprehensiveness offer vast potential for exploring several other speech and language processing tasks such as speaker diarization, language identification, and query by example. The open availability of IndicVoices and the accompanying tools and guidelines are poised to catalyze further research, making significant strides towards digital inclusivity and the development of speech technologies that cater to India's linguistic diversity.
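
As an illustration of one such downstream task, query-by-example retrieval is often framed as nearest-neighbour search over audio embeddings. The sketch below uses random vectors as stand-ins for embeddings from any speech encoder; the embedding source, dimensions, and corpus size are assumptions for illustration, not details from the paper.

```python
import numpy as np

rng = np.random.default_rng(0)

# Stand-in embeddings: in practice these would come from a speech encoder;
# here they are random vectors purely for illustration.
corpus = rng.normal(size=(1000, 256))   # 1000 candidate utterances, 256-dim embeddings
query = rng.normal(size=(256,))         # embedding of the spoken query

def top_k_cosine(query_vec, candidates, k=5):
    """Return indices and scores of the k candidates most similar to the query."""
    q = query_vec / np.linalg.norm(query_vec)
    c = candidates / np.linalg.norm(candidates, axis=1, keepdims=True)
    scores = c @ q
    order = np.argsort(scores)[::-1][:k]
    return order, scores[order]

indices, scores = top_k_cosine(query, corpus)
print(indices, scores)
```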

Future Directions

The authors acknowledge certain limitations, such as the coverage of districts and the representation of conversational speech. Addressing these aspects in future iterations could further enhance the dataset's utility. Moreover, the ongoing collection and transcription efforts aim to expand the dataset, and subsequent work could focus on a more detailed evaluation across varied demographics and use cases. The development of IndicVoices is a stepping stone towards realizing the vision of truly inclusive speech technologies, opening avenues for multilingual research and applications.

Concluding Remarks

IndicVoices represents a significant contribution to the field of speech technology, particularly for the underrepresented languages of India. By facilitating the development of more accurate and inclusive ASR models, this work paves the way for greater digital accessibility and equity. Future research and innovations leveraging this dataset have the potential to transform the landscape of speech technology, making digital services more accessible to the linguistically diverse population of India.