Unveiling the Potential of LLM-Based ASR on Chinese Open-Source Datasets (2405.02132v3)

Published 3 May 2024 in cs.SD, cs.CL, and eess.AS

Abstract: LLMs have demonstrated unparalleled effectiveness in various NLP tasks, and integrating LLMs with automatic speech recognition (ASR) is becoming a mainstream paradigm. Building upon this momentum, our research delves into an in-depth examination of this paradigm on a large open-source Chinese dataset. Specifically, our research aims to evaluate the impact of various configurations of speech encoders, LLMs, and projector modules in the context of the speech foundation encoder-LLM ASR paradigm. Furthermore, we introduce a three-stage training approach, expressly developed to enhance the model's ability to align auditory and textual information. The implementation of this approach, alongside the strategic integration of ASR components, enabled us to achieve SOTA performance on the AISHELL-1, Test_Net, and Test_Meeting test sets. Our analysis presents an empirical foundation for future research in LLM-based ASR systems and offers insights into optimizing performance using Chinese datasets. We will publicly release all scripts used for data preparation, training, inference, and scoring, as well as pre-trained models and training logs to promote reproducible research.
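The projector module mentioned in the abstract bridges the speech encoder's frame-level outputs and the LLM's embedding space. A minimal sketch of one common projector design (frame stacking for sequence downsampling followed by a linear projection) is shown below; all dimensions, the stacking factor, and the function names are illustrative assumptions, not the paper's actual configuration.

```python
import numpy as np

# Assumed illustrative dimensions (not taken from the paper):
ENC_DIM = 512   # speech encoder output dimension
LLM_DIM = 1024  # LLM token embedding dimension
STACK = 4       # adjacent frames stacked to shorten the sequence

rng = np.random.default_rng(0)
# Projector weights: maps stacked encoder frames into LLM embedding space.
W = rng.standard_normal((ENC_DIM * STACK, LLM_DIM)) * 0.01

def project(encoder_out: np.ndarray) -> np.ndarray:
    """Stack adjacent frames to reduce sequence length, then linearly
    project into the LLM embedding space (hypothetical projector sketch)."""
    T, D = encoder_out.shape
    T = T - T % STACK                                  # drop leftover frames
    stacked = encoder_out[:T].reshape(T // STACK, D * STACK)
    return stacked @ W

# Example: 100 encoder frames become 25 "speech tokens" for the LLM.
speech_feats = rng.standard_normal((100, ENC_DIM))
llm_inputs = project(speech_feats)
print(llm_inputs.shape)  # (25, 1024)
```

In practice the projected embeddings are concatenated with prompt token embeddings before being fed to the LLM; the paper's three-stage training schedule governs which of these components are trainable at each stage.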

Authors (12)
  1. Xuelong Geng
  2. Tianyi Xu
  3. Kun Wei
  4. Hongfei Xue
  5. He Wang
  6. Yangze Li
  7. Pengcheng Guo
  8. Yuhang Dai
  9. Longhao Li
  10. Mingchen Shao
  11. Lei Xie
  12. Bingshen Mu
Citations (8)