
MINT: Boosting Audio-Language Model via Multi-Target Pre-Training and Instruction Tuning (2402.07485v5)

Published 12 Feb 2024 in cs.SD and eess.AS

Abstract: In the realm of audio-language pre-training (ALP), the challenge of achieving cross-modal alignment is significant. Moreover, the integration of audio inputs with diverse distributions and task variations poses challenges in developing generic audio-language models. In this study, we present MINT, a novel ALP framework that boosts audio-language models through multi-target pre-training and instruction tuning. MINT leverages the strength of frozen pre-trained audio encoders and large language models (LLMs) to improve audio-language pre-training, enabling effective transferability to both audio-text understanding and generation tasks. To address the modality gap, we introduce Bridge-Net, a trainable module that enhances cross-modality alignment and the model's ability to follow instructions for a variety of audio-text tasks. Bridge-Net is pivotal within MINT, initially enhancing audio-language representation learning through a multi-target pre-training approach. Subsequently, Bridge-Net further boosts audio-to-language generative learning by integrating a frozen LLM with instruction tuning. This integration empowers MINT to extract features in a flexible and effective manner, specifically tailored to the provided instructions for diverse tasks. Experimental results demonstrate that MINT attains superior performance across various audio-language understanding and generation tasks, highlighting its robust generalization capabilities even in zero-shot scenarios.
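The architecture the abstract describes, a frozen audio encoder and a frozen LLM connected by the trainable Bridge-Net, can be illustrated with a minimal PyTorch sketch. This is an assumption-laden illustration of the general wiring, not the paper's implementation: the class name BridgeNet, the dimensions, and the learnable-query cross-attention design are hypothetical. Learnable query tokens attend over frozen audio-encoder features and are projected into the LLM's embedding space; only the bridge receives gradients.

    import torch
    import torch.nn as nn

    class BridgeNet(nn.Module):
        # Hypothetical bridge between a frozen audio encoder and a frozen LLM.
        # Learnable query tokens cross-attend over the audio features, and the
        # result is projected into the LLM's embedding space.
        def __init__(self, audio_dim=768, llm_dim=4096, num_queries=32, num_layers=4):
            super().__init__()
            self.queries = nn.Parameter(torch.randn(num_queries, audio_dim))
            layer = nn.TransformerDecoderLayer(d_model=audio_dim, nhead=8, batch_first=True)
            self.cross_attn = nn.TransformerDecoder(layer, num_layers=num_layers)
            self.proj = nn.Linear(audio_dim, llm_dim)

        def forward(self, audio_feats):
            # audio_feats: (batch, time, audio_dim) from the frozen audio encoder
            q = self.queries.unsqueeze(0).expand(audio_feats.size(0), -1, -1)
            fused = self.cross_attn(tgt=q, memory=audio_feats)
            return self.proj(fused)  # (batch, num_queries, llm_dim)

    # Only the bridge is trained; encoder and LLM parameters stay frozen.
    bridge = BridgeNet()
    audio_feats = torch.randn(2, 500, 768)  # stand-in for frozen-encoder output
    llm_prefix = bridge(audio_feats)        # (2, 32, 4096): prefix tokens for the LLM

Under this reading, instruction tuning would condition the frozen LLM on both the instruction text and these bridge-produced prefix tokens, which is how extracted features can be tailored to the task at hand.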

Authors (6)
  1. Hang Zhao (156 papers)
  2. Yifei Xin (13 papers)
  3. Zhesong Yu (6 papers)
  4. Bilei Zhu (11 papers)
  5. Lu Lu (189 papers)
  6. Zejun Ma (78 papers)
Citations (2)