
Adapting Large Language Model with Speech for Fully Formatted End-to-End Speech Recognition (2307.08234v2)

Published 17 Jul 2023 in eess.AS

Abstract: Most end-to-end (E2E) speech recognition models are composed of encoder and decoder blocks that perform acoustic and language modeling functions. Pretrained LLMs have the potential to improve the performance of E2E ASR. However, integrating a pretrained LLM into an E2E speech recognition model has shown limited benefits due to the mismatches between text-based LLMs and those used in E2E ASR. In this paper, we explore an alternative approach by adapting a pretrained LLM to speech. Our experiments on fully-formatted E2E ASR transcription tasks across various domains demonstrate that our approach can effectively leverage the strengths of pretrained LLMs to produce more readable ASR transcriptions. Our model, which is based on pretrained LLMs with either an encoder-decoder or decoder-only structure, surpasses strong ASR models such as Whisper in terms of recognition error rate, even when formats like punctuation and capitalization are taken into account.
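The core idea the abstract describes, adapting a pretrained text LLM to accept speech, can be sketched in a minimal, purely illustrative form: acoustic feature frames are mapped through a small adapter into the LM's token-embedding space and prepended to the embedded text tokens, so a decoder-only LM can condition on speech. All dimensions, names, and the adapter design below are hypothetical assumptions for illustration, not the paper's actual implementation.

```python
import numpy as np

rng = np.random.default_rng(0)

d_model = 16      # hypothetical LM embedding size
d_acoustic = 40   # hypothetical acoustic feature size (e.g. filterbanks)
vocab = 100

# Pretrained LM token embeddings (random stand-in here)
tok_emb = rng.normal(size=(vocab, d_model))

# Small adapter projecting acoustic features into the LM embedding space
W_adapt = rng.normal(size=(d_acoustic, d_model)) * 0.1

def build_inputs(speech_feats, text_ids):
    """Project speech frames into the LM embedding space and prepend
    them to the embedded text tokens, forming one decoder input."""
    speech_emb = speech_feats @ W_adapt       # (T_speech, d_model)
    text_emb = tok_emb[text_ids]              # (T_text, d_model)
    return np.concatenate([speech_emb, text_emb], axis=0)

speech = rng.normal(size=(50, d_acoustic))    # 50 acoustic frames
text = np.array([5, 17, 42])                  # partial transcription so far
seq = build_inputs(speech, text)
print(seq.shape)  # (53, 16): 50 speech frames + 3 text tokens
```

Under this framing, the LM's decoder weights can stay close to their pretrained values while only the adapter (and optionally the LM) is tuned on paired speech-text data, which is one way such speech adaptation is commonly set up.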

Authors (8)
  1. Shaoshi Ling (8 papers)
  2. Yuxuan Hu (35 papers)
  3. Shuangbei Qian (1 paper)
  4. Guoli Ye (15 papers)
  5. Yao Qian (37 papers)
  6. Yifan Gong (82 papers)
  7. Ed Lin (3 papers)
  8. Michael Zeng (76 papers)
Citations (11)