Distilling the Knowledge of BERT for Sequence-to-Sequence ASR (2008.03822v1)

Published 9 Aug 2020 in cs.CL and eess.AS

Abstract: Attention-based sequence-to-sequence (seq2seq) models have achieved promising results in automatic speech recognition (ASR). However, as these models decode in a left-to-right way, they do not have access to context on the right. We leverage both left and right context by applying BERT as an external language model (LM) to seq2seq ASR through knowledge distillation. In our proposed method, BERT generates soft labels to guide the training of seq2seq ASR. Furthermore, we leverage context beyond the current utterance as input to BERT. Experimental evaluations show that our method significantly improves the ASR performance from the seq2seq baseline on the Corpus of Spontaneous Japanese (CSJ). Knowledge distillation from BERT outperforms that from a transformer LM that only looks at left context. We also show the effectiveness of leveraging context beyond the current utterance. Our method outperforms other LM application approaches such as n-best rescoring and shallow fusion, while requiring no extra inference cost.
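The central idea is to train the seq2seq decoder against BERT's token distributions as soft targets in addition to the usual hard-label cross-entropy. Below is a minimal PyTorch sketch of such a soft-label distillation loss, written under stated assumptions: the function name, tensor shapes, and the interpolation weight `alpha` are illustrative choices, not the authors' implementation.

```python
# Sketch (not the paper's code) of soft-label knowledge distillation
# from BERT into a seq2seq ASR decoder, assuming PyTorch.
import torch
import torch.nn.functional as F

def distillation_loss(student_logits, bert_soft_labels, targets, alpha=0.5):
    """Interpolate hard-label cross-entropy with a soft-label KL term.

    student_logits:   (batch, seq_len, vocab) raw decoder outputs
    bert_soft_labels: (batch, seq_len, vocab) probabilities from BERT
                      (e.g. masked-prediction distributions), detached
    targets:          (batch, seq_len) ground-truth token ids
    alpha:            weight on the distillation term (assumed hyperparameter)
    """
    vocab = student_logits.size(-1)
    # Standard cross-entropy against the ground-truth transcript.
    ce = F.cross_entropy(student_logits.view(-1, vocab), targets.view(-1))
    # KL divergence between the decoder's distribution and BERT's soft labels.
    log_probs = F.log_softmax(student_logits, dim=-1)
    kl = F.kl_div(log_probs, bert_soft_labels, reduction="batchmean")
    return (1.0 - alpha) * ce + alpha * kl
```

Because the soft labels are consumed only during training, decoding is unchanged, which is consistent with the paper's claim of no extra inference cost.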

Authors (6)
  1. Hayato Futami (24 papers)
  2. Hirofumi Inaguma (42 papers)
  3. Sei Ueno (4 papers)
  4. Masato Mimura (46 papers)
  5. Shinsuke Sakai (8 papers)
  6. Tatsuya Kawahara (61 papers)
Citations (48)