
Delayed Fusion: Integrating Large Language Models into First-Pass Decoding in End-to-end Speech Recognition (2501.09258v1)

Published 16 Jan 2025 in cs.CL, cs.SD, and eess.AS

Abstract: This paper presents an efficient decoding approach for end-to-end automatic speech recognition (E2E-ASR) with LLMs. Although shallow fusion is the most common approach to incorporate LLMs into E2E-ASR decoding, we face two practical problems with LLMs. (1) LLM inference is computationally costly. (2) There may be a vocabulary mismatch between the ASR model and the LLM. To resolve this mismatch, we need to retrain the ASR model and/or the LLM, which is at best time-consuming and in many cases not feasible. We propose "delayed fusion," which applies LLM scores to ASR hypotheses with a delay during decoding and enables easier use of pre-trained LLMs in ASR tasks. This method can reduce not only the number of hypotheses scored by the LLM but also the number of LLM inference calls. It also allows re-tokenization of ASR hypotheses during decoding if the ASR model and the LLM employ different tokenizations. We demonstrate that delayed fusion provides improved decoding speed and accuracy compared to shallow fusion and N-best rescoring using the LibriHeavy ASR corpus and three public LLMs, OpenLLaMA 3B & 7B and Mistral 7B.
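To make the idea concrete, here is a minimal, hypothetical sketch of delayed fusion in a toy beam search. All names (`asr_score`, `llm_score`, `expand`, `retokenize`) are illustrative stand-ins, not the paper's implementation: the ASR model scores every step, while the LLM score is fused into the beam only every `delay` steps, after re-tokenizing each hypothesis into the (assumed different) LLM vocabulary.

```python
# Hypothetical sketch of "delayed fusion" (not the authors' code).
# ASR scores are applied at every decoding step; LLM scores are applied
# with a delay, so fewer hypotheses and fewer LLM calls are needed.

def retokenize(hyp_tokens):
    """Toy re-tokenization: join ASR tokens and split on '_' as a stand-in
    for mapping ASR units onto the LLM's vocabulary."""
    return "".join(hyp_tokens).split("_")

def delayed_fusion_decode(steps, beam_size, delay, asr_score, llm_score, expand):
    """Beam search where the LLM score is fused only every `delay` steps."""
    beams = [([], 0.0)]  # list of (hypothesis tokens, combined score)
    for t in range(1, steps + 1):
        # Expand each hypothesis with candidate ASR tokens and their scores.
        cands = [(hyp + [tok], score + asr_score(hyp, tok))
                 for hyp, score in beams
                 for tok in expand(hyp)]
        cands.sort(key=lambda x: -x[1])
        beams = cands[:beam_size]
        # Delayed fusion: only the surviving beam is re-tokenized and
        # scored by the LLM, and only every `delay` steps.
        if t % delay == 0:
            beams = [(hyp, score + llm_score(retokenize(hyp)))
                     for hyp, score in beams]
            beams.sort(key=lambda x: -x[1])
    return beams[0][0]
```

With `delay = 1` this degenerates to step-wise (shallow-fusion-like) LLM scoring of the pruned beam; larger delays batch more hypothesis extensions per LLM call, which is the source of the claimed speedup.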

Authors (5)
  1. Takaaki Hori (41 papers)
  2. Martin Kocour (11 papers)
  3. Adnan Haider (7 papers)
  4. Erik McDermott (9 papers)
  5. Xiaodan Zhuang (9 papers)