Data Augmentation for Low-Resource Quechua ASR Improvement (2207.06872v1)

Published 14 Jul 2022 in cs.SD, cs.CL, and eess.AS

Abstract: Automatic Speech Recognition (ASR) is a key element in new services that help users interact with automated systems. Deep learning methods have made it possible to deploy English ASR systems with word error rates (WER) below 5%. However, these methods are only available for languages with hundreds or thousands of hours of audio and corresponding transcriptions. To speed up the availability of resources that can improve ASR performance for so-called low-resource languages, methods of creating new resources on the basis of existing ones are being investigated. In this paper we describe our data augmentation approach to improving the results of ASR models for low-resource, agglutinative languages. We carry out experiments developing an ASR system for Quechua using the wav2letter++ model. Our approach reduced WER by 8.73% relative to the base model. The resulting ASR model obtained 22.75% WER and was trained with 99 hours of original resources and 99 hours of synthetic data obtained with a combination of text augmentation and synthetic speech generation.
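The abstract describes pairing text augmentation with synthetic speech generation to double the training data. As a rough illustration of the text-augmentation half of such a pipeline, the sketch below produces transcript variants by shuffling adjacent word pairs; this is a generic technique for illustration only, not the paper's actual method, and the `augment_transcript` helper and the TTS step (omitted here) are hypothetical.

```python
import random


def augment_transcript(words, n_swaps=1, seed=None):
    """Create a synthetic transcript variant by swapping adjacent word pairs.

    A generic text-augmentation illustration; the paper's specific
    augmentation strategy and its synthetic-speech (TTS) step, which
    would turn each new transcript into a matching audio sample, are
    not reproduced here.
    """
    rng = random.Random(seed)
    out = list(words)
    for _ in range(n_swaps):
        if len(out) < 2:
            break
        i = rng.randrange(len(out) - 1)  # pick a position, swap with its neighbor
        out[i], out[i + 1] = out[i + 1], out[i]
    return out


# Example with a short (hypothetical) Quechua transcript:
original = "allin p'unchay kachun".split()
synthetic = augment_transcript(original, n_swaps=1, seed=0)
```

Each synthetic transcript would then be fed to a TTS system to produce the paired audio, yielding the "synthetic data" half of the training set.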

Authors (5)
  1. Rodolfo Zevallos (7 papers)
  2. Guillermo Cámbara (9 papers)
  3. Mireia Farrús (10 papers)
  4. Jordi Luque (19 papers)
  5. Nuria Bel (1 paper)
Citations (4)
