A Comprehensive Solution to Connect Speech Encoder and Large Language Model for ASR (2406.17272v1)

Published 25 Jun 2024 in cs.LG

Abstract: Recent works have shown promising results in connecting speech encoders to LLMs for speech recognition. However, several limitations persist, including limited fine-tuning options, a lack of mechanisms to enforce speech-text alignment, and high insertion errors, especially under domain-mismatch conditions. This paper presents a comprehensive solution to address these issues. We begin by investigating more thoughtful fine-tuning schemes. Next, we propose a matching loss to enhance alignment between modalities. Finally, we explore training and inference methods to mitigate high insertion errors. Experimental results on the LibriSpeech corpus demonstrate that partially fine-tuning the encoder and LLM with parameter-efficient methods, such as LoRA, is the most cost-effective approach. The matching loss improves modality alignment and thereby performance, and the proposed training and inference methods significantly reduce insertion errors.
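
The abstract names three ingredients (partial LoRA fine-tuning, a modality matching loss, and insertion-error mitigation) but gives no formulas or architecture details. The sketch below is a minimal, hypothetical PyTorch reading of the first two: a learned projector maps speech-encoder frames into the LLM embedding space, and an auxiliary matching loss encourages the two modalities to align. The class name, dimensions, pooling, and the cosine-distance form of the loss are all assumptions; the paper's actual loss may differ.

```python
# Hypothetical sketch of the pipeline the abstract describes.
# Module names, dimensions, and the exact loss form are assumptions.
import torch
import torch.nn as nn
import torch.nn.functional as F

class SpeechLLMConnector(nn.Module):
    def __init__(self, enc_dim=1024, llm_dim=4096):
        super().__init__()
        # Maps speech-encoder frames into the LLM's embedding space.
        self.projector = nn.Linear(enc_dim, llm_dim)

    def forward(self, speech_feats, text_embeds):
        # speech_feats: (B, T_s, enc_dim); text_embeds: (B, T_t, llm_dim)
        speech_embeds = self.projector(speech_feats)        # (B, T_s, llm_dim)
        # One plausible matching loss: mean-pool each modality over time
        # and penalize the cosine distance between the pooled vectors.
        s = F.normalize(speech_embeds.mean(dim=1), dim=-1)  # (B, llm_dim)
        t = F.normalize(text_embeds.mean(dim=1), dim=-1)    # (B, llm_dim)
        match_loss = (1.0 - (s * t).sum(dim=-1)).mean()
        return speech_embeds, match_loss
```

In training, this auxiliary term would be added to the usual ASR objective, e.g. `loss = asr_loss + lam * match_loss`, while LoRA adapters (for instance via Hugging Face peft) handle the parameter-efficient updates to the encoder and LLM; the loss weighting and adapter placement are not specified in the abstract.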

Authors (7)
  1. Van Tung Pham
  2. Yist Lin
  3. Tao Han
  4. Wei Li
  5. Jun Zhang
  6. Lu Lu
  7. Yuxuan Wang