Distilling Knowledge Using Parallel Data for Far-field Speech Recognition (1802.06941v1)

Published 20 Feb 2018 in cs.CL, cs.SD, and eess.AS

Abstract: To improve the performance of far-field speech recognition, this paper proposes distilling knowledge from a close-talking model (the teacher) to a far-field model (the student) using parallel data. The student model is trained to imitate the output distributions of the teacher model; this constraint is realized by minimizing the Kullback-Leibler (KL) divergence between the output distributions of the student and teacher models. Experimental results on the AMI corpus show that the best student model achieves up to a 4.7% absolute word error rate (WER) reduction compared with conventionally trained baseline models.
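The distillation objective described in the abstract can be sketched in a few lines of plain Python. This is a minimal, framework-free illustration, not the paper's implementation: the logits, class count, and variable names are hypothetical, and a real system would compute this loss per acoustic frame over senone posteriors inside a neural-network training loop.

```python
import math

def softmax(logits, temperature=1.0):
    # Temperature-scaled softmax; temperature > 1 softens the distribution,
    # a common (though here assumed) choice in knowledge distillation.
    scaled = [x / temperature for x in logits]
    m = max(scaled)  # subtract max for numerical stability
    exps = [math.exp(x - m) for x in scaled]
    total = sum(exps)
    return [e / total for e in exps]

def kl_divergence(p, q, eps=1e-12):
    # KL(p || q): p is the teacher's output distribution, q the student's.
    return sum(pi * math.log((pi + eps) / (qi + eps)) for pi, qi in zip(p, q))

# Hypothetical logits for one frame, produced from parallel recordings:
teacher_logits = [2.0, 0.5, -1.0]   # close-talking (teacher) model
student_logits = [1.0, 0.8, -0.2]   # far-field (student) model

teacher_probs = softmax(teacher_logits)
student_probs = softmax(student_logits)

# Distillation loss for this frame: minimized w.r.t. the student's parameters.
loss = kl_divergence(teacher_probs, student_probs)
```

Because the two recordings are time-aligned (parallel data), the teacher's frame-level posteriors serve directly as soft targets for the student; minimizing this KL term pushes the far-field model's outputs toward the close-talking model's.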

Authors (4)
  1. Jiangyan Yi (77 papers)
  2. Jianhua Tao (139 papers)
  3. Zhengqi Wen (69 papers)
  4. Bin Liu (441 papers)
Citations (4)
