Keep Decoding Parallel with Effective Knowledge Distillation from Language Models to End-to-end Speech Recognisers (2401.11700v1)

Published 22 Jan 2024 in cs.CL, cs.SD, and eess.AS

Abstract: This study presents a novel approach for knowledge distillation (KD) from a BERT teacher model to an automatic speech recognition (ASR) model using intermediate layers. To distil the teacher's knowledge, we use an attention decoder that learns from BERT's token probabilities. Our method shows that language model (LM) information can be more effectively distilled into an ASR model using both the intermediate layers and the final layer. By using the intermediate layers as a distillation target, we can more effectively distil LM knowledge into the lower network layers. Using our method, we achieve better recognition accuracy than with shallow fusion of an external LM, allowing us to maintain fast parallel decoding. Experiments on the LibriSpeech dataset demonstrate the effectiveness of our approach in enhancing greedy decoding with connectionist temporal classification (CTC).
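The training objective described in the abstract can be illustrated with a minimal sketch: attention decoders attached to an intermediate encoder layer and to the final layer are trained to match the BERT teacher's soft token probabilities via KL divergence, while a CTC head on the final layer preserves fast, parallel greedy decoding. The module names, loss weighting, and temperature below are illustrative assumptions, not the authors' implementation.

```python
# Minimal training-loss sketch (not the authors' code). It assumes an `encoder`
# that returns final and intermediate hidden states plus output frame lengths,
# and two hypothetical attention decoders that predict token distributions.
import torch
import torch.nn as nn
import torch.nn.functional as F


class CTCWithLMDistillation(nn.Module):
    def __init__(self, encoder, attn_decoder_mid, attn_decoder_final,
                 d_model, vocab_size, blank_id=0):
        super().__init__()
        self.encoder = encoder                      # e.g. a Conformer/Transformer stack
        self.attn_decoder_mid = attn_decoder_mid    # attached to an intermediate layer
        self.attn_decoder_final = attn_decoder_final
        self.ctc_head = nn.Linear(d_model, vocab_size)
        self.ctc_loss = nn.CTCLoss(blank=blank_id, zero_infinity=True)

    def forward(self, speech, speech_lens, tokens, token_lens,
                teacher_probs, lambda_ctc=0.3, temperature=1.0):
        # Encoder yields final states, one intermediate layer, and frame lengths.
        h_final, h_mid, frame_lens = self.encoder(speech, speech_lens)

        # CTC branch on the final layer: this is all that is used at inference,
        # so greedy decoding stays fully parallel (no autoregressive LM fusion).
        log_probs = F.log_softmax(self.ctc_head(h_final), dim=-1)  # (B, T, V)
        loss_ctc = self.ctc_loss(log_probs.transpose(0, 1),        # (T, B, V)
                                 tokens, frame_lens, token_lens)

        # Distillation branches: attention decoders on the intermediate and final
        # layers are trained to match the BERT teacher's soft token probabilities.
        loss_kd = 0.0
        for decoder, hidden in ((self.attn_decoder_mid, h_mid),
                                (self.attn_decoder_final, h_final)):
            student_logits = decoder(hidden, tokens)               # (B, L, V)
            student_logp = F.log_softmax(student_logits / temperature, dim=-1)
            loss_kd = loss_kd + F.kl_div(student_logp, teacher_probs,
                                         reduction="batchmean")

        # Interpolation weight is an illustrative assumption.
        return lambda_ctc * loss_ctc + (1.0 - lambda_ctc) * loss_kd
```

At inference only the CTC head is kept, so the distillation decoders and the BERT teacher add no decoding cost; this is what lets the method improve on shallow fusion with an external LM while retaining parallel greedy decoding.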

Authors (4)
  1. Michael Hentschel (23 papers)
  2. Yuta Nishikawa (4 papers)
  3. Tatsuya Komatsu (29 papers)
  4. Yusuke Fujita (37 papers)
Citations (3)