
Development of Automatic Speech Recognition for Kazakh Language using Transfer Learning (2003.04710v1)

Published 8 Mar 2020 in eess.AS and cs.SD

Abstract: Developing an automatic speech recognition (ASR) system for the Kazakh language is very challenging due to a lack of data. Existing Kazakh speech data with corresponding transcriptions are difficult to access and insufficient to obtain noteworthy results. For this reason, speech recognition for Kazakh has not been explored well. Only a few works investigate this area, using traditional methods such as Hidden Markov Models and Gaussian Mixture Models, and they suffer from poor outcomes and insufficient data. In this work, we propose a new method that takes a pre-trained Russian-language model and uses its knowledge as a starting point for our neural network, i.e., we transfer the weights of the pre-trained model to our network. We chose a Russian model because the pronunciation of Kazakh and Russian is quite similar (their alphabets share 78 percent of their letters) and large corpora of Russian speech are available. We collected a dataset of Kazakh speech with transcriptions at Suleyman Demirel University, with 50 native speakers each recording around 400 sentences drawn from well-known Kazakh books. We considered four scenarios in our experiments. First, we trained networks with 2 LSTM layers and with 2 BiLSTM layers without using the pre-trained Russian model. Second, we trained the same 2-LSTM and 2-BiLSTM networks using the pre-trained model. As a result, using the external Russian speech recognition model improved our models' training cost and Label Error Rate by up to 24 percent and 32 percent, respectively. The pre-trained Russian model was trained on 100 hours of data with the same neural network architecture.
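The weight-transfer idea described above can be sketched in a few lines of PyTorch: initialize a Kazakh acoustic model with the weights of a Russian model of the same architecture, leaving only the language-specific output layer randomly initialized. This is a minimal illustrative sketch, not the authors' code; the layer sizes, feature dimension, and label counts are assumptions for demonstration.

```python
import torch
import torch.nn as nn

class AcousticModel(nn.Module):
    """2-layer BiLSTM acoustic model (sizes are illustrative assumptions)."""
    def __init__(self, n_feats=40, hidden=256, n_labels=34):
        super().__init__()
        # 2 stacked bidirectional LSTM layers over acoustic feature frames
        self.bilstm = nn.LSTM(n_feats, hidden, num_layers=2,
                              bidirectional=True, batch_first=True)
        # per-frame label logits (e.g., for CTC-style training)
        self.out = nn.Linear(2 * hidden, n_labels)

    def forward(self, x):
        h, _ = self.bilstm(x)
        return self.out(h)

# Source model (in the paper: trained on ~100 h of Russian speech)
russian = AcousticModel(n_labels=34)
# Target model; the output layer differs because the label sets differ
kazakh = AcousticModel(n_labels=42)

# Transfer: copy every parameter whose name and shape match (the shared
# BiLSTM stack), leaving the Kazakh-specific output layer untouched.
src = russian.state_dict()
dst = kazakh.state_dict()
transferred = {k: v for k, v in src.items()
               if k in dst and v.shape == dst[k].shape}
dst.update(transferred)
kazakh.load_state_dict(dst)

# The Kazakh model would then be fine-tuned on the Kazakh dataset as usual.
```

In this sketch the transferred BiLSTM weights serve as the "starting point" the abstract mentions; fine-tuning on the Kazakh corpus then adapts them to the target language.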

Authors (3)
  1. Amirgaliyev E. N. (1 paper)
  2. Kuanyshbay D. N. (1 paper)
  3. Baimuratov O (1 paper)
Citations (13)
