Advancing Multi-Accented LSTM-CTC Speech Recognition using a Domain Specific Student-Teacher Learning Paradigm (1809.06833v3)

Published 18 Sep 2018 in eess.AS

Abstract: Non-native speech causes automatic speech recognition systems to degrade in performance. Past strategies to address this challenge have considered model adaptation, accent classification with model selection, alternate pronunciation lexicons, etc. In this study, we consider a recurrent neural network (RNN) with a connectionist temporal classification (CTC) cost function trained on multi-accent English data including US (native), Indian, and Hispanic accents. We exploit dark knowledge from a model trained on the multi-accent data to train student models under the guidance of both a teacher model and the CTC cost of the target transcription. We show that transferring knowledge from a single trained RNN-CTC model to a student model yields better performance than the stand-alone teacher model. Since the outputs of different trained CTC models are not necessarily aligned, it is not possible to simply use an ensemble of CTC teacher models. To address this problem, we train accent-specific models under the guidance of a single multi-accent teacher, which results in multiple trained CTC models whose outputs are aligned. Furthermore, we train a student model under the supervision of these accent-specific teachers, resulting in an even more complementary model, which achieves a 20.1% relative Character Error Rate (CER) reduction compared to the baseline trained without any teacher. Starting from this effective multi-accent model, we achieve further improvement for each accent by adapting the model to that accent. Using the accent-specific model's outputs to regularize the adaptation process (i.e., a knowledge-distillation form of Kullback-Leibler (KL) divergence regularization) yields superior performance compared to the conventional approach using general teacher models.
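The core training objective the abstract describes, CTC supervision on the transcript interpolated with a KL-divergence term toward the teacher's frame-level outputs, can be illustrated with a short sketch. The following is a minimal PyTorch sketch, assuming frame-aligned student and teacher logits (the condition the paper establishes by distilling all models from a single multi-accent teacher); the interpolation weight `alpha`, temperature, and tensor shapes are illustrative assumptions, not the authors' exact implementation.

```python
import torch
import torch.nn.functional as F

def distillation_ctc_loss(student_logits, teacher_logits, targets,
                          input_lengths, target_lengths,
                          alpha=0.5, temperature=2.0):
    """Sketch of a student-teacher CTC objective: interpolate the CTC
    loss on the target transcription with a frame-level KL term toward
    the teacher's output distribution. alpha and temperature are
    illustrative hyperparameters, not values from the paper.

    student_logits, teacher_logits: (T, N, C) unnormalized scores,
    where T is frames, N is batch size, C is characters + blank.
    """
    # Standard CTC loss on the target transcription.
    log_probs = F.log_softmax(student_logits, dim=-1)
    ctc = F.ctc_loss(log_probs, targets, input_lengths, target_lengths,
                     blank=0, zero_infinity=True)

    # Frame-level KL divergence between temperature-softened student
    # and teacher distributions. This is only meaningful because the
    # student and teacher share the same frame alignment; independently
    # trained CTC models would not, which is why the paper cannot
    # naively ensemble CTC teachers.
    soft_teacher = F.softmax(teacher_logits / temperature, dim=-1)
    soft_student = F.log_softmax(student_logits / temperature, dim=-1)
    kl = F.kl_div(soft_student, soft_teacher, reduction="batchmean")
    kl = kl * (temperature ** 2)  # usual distillation gradient scaling

    return alpha * ctc + (1.0 - alpha) * kl
```

The same interpolated loss also matches the adaptation step described at the end of the abstract: there, the frozen accent-specific teacher's outputs supply the KL term that regularizes per-accent fine-tuning of the multi-accent model.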

Authors (3)
  1. Shahram Ghorbani (7 papers)
  2. Ahmet E. Bulut (3 papers)
  3. John H. L. Hansen (58 papers)
Citations (20)
