Guiding an Automatic Speech Recognition Decoder Using Large Language Models (2508.02228v1)
Abstract: Automatic Speech Recognition (ASR) consists of an acoustic model (AM) and a language model (LM). The AM estimates the probability of an acoustic signal given a sequence of linguistic units, typically phones, characters, or tokens, while the LM assesses the likelihood of a specific sequence of words or tokens. Although large language models (LLMs) have demonstrated significant potential across various tasks, integrating them into ASR remains an open challenge. By decomposing the maximum a posteriori (MAP) estimator of words (or tokens) given the acoustic signal, we derive an iterative procedure that facilitates a novel integration of the AM and the LLM while maintaining their separability. This approach enables each component to be independently trained and improved using its own data, thereby maximizing the system's performance by leveraging the strengths of both models without requiring joint optimization. We demonstrate the effectiveness of our method against three conventional LM baselines (N-gram, GCNN, and TransformerLM) across multiple datasets spanning various speech styles, including ALLSSTAR, WSJ0, and TED-LIUM 3. Our experiments involved two acoustic models (wav2vec 2.0 and HuBERT) and three LLMs (GPT-2, LLaMA 2, and Falcon). Notably, our method proves particularly effective on complex sentences, acronyms, and domain-specific vocabulary.
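The abstract refers to decomposing the MAP estimator without writing it out. For reference, the standard Bayes factorization that this decomposition starts from is sketched below, where $X$ denotes the acoustic signal and $W$ the word (or token) sequence; the paper's specific iterative procedure is not given in the abstract.

```latex
% Standard MAP formulation of ASR decoding: the AM supplies p(X | W)
% and the LM supplies the prior P(W). The paper's iterative AM/LLM
% integration is derived by decomposing this estimator.
\hat{W} = \arg\max_{W} P(W \mid X)
        = \arg\max_{W} \, p(X \mid W)\, P(W)
```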
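To make the "separability" claim concrete, here is a minimal generic sketch of combining an independently trained AM and LM at decode time via log-linear hypothesis rescoring. This is a standard baseline technique, not the paper's iterative method; all names and scores (`am_log_prob`, `lm_log_prob`, `lm_weight`) are illustrative assumptions.

```python
# Generic AM/LM rescoring sketch: pick the hypothesis maximizing
#   log p(X | W) + lambda * log P(W)
# Neither model needs to be retrained jointly; each contributes a score.

def rescore(hypotheses, am_log_prob, lm_log_prob, lm_weight=0.5):
    """Return the hypothesis with the best log-linear combined score."""
    def score(w):
        return am_log_prob[w] + lm_weight * lm_log_prob[w]
    return max(hypotheses, key=score)

# Toy example with made-up log-probabilities for two candidates.
hyps = ["the cat sat", "the cat sad"]
am = {"the cat sat": -4.1, "the cat sad": -3.9}   # acoustic scores
lm = {"the cat sat": -6.0, "the cat sad": -11.2}  # language-model scores
print(rescore(hyps, am, lm))  # -> "the cat sat": LM evidence wins
```

Because the two models interact only through their scores, either one can be swapped or improved independently, which is the property the abstract emphasizes.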