Prefix tuning for automated audio captioning (2303.17489v2)

Published 30 Mar 2023 in eess.AS, cs.MM, and cs.SD

Abstract: Audio captioning aims to generate text descriptions from environmental sounds. One challenge of audio captioning is the difficulty of generalization due to the lack of paired audio-text training data. In this work, we propose a simple yet effective method for dealing with small-scale datasets by leveraging a pre-trained LLM. We keep the LLM frozen to maintain its expressivity for text generation, and we only learn to extract global and temporal features from the input audio. To bridge the modality gap between the audio features and the LLM, we employ mapping networks that translate the audio features into continuous vectors the LLM can understand, called prefixes. We evaluate our proposed method on the Clotho and AudioCaps datasets and show that it outperforms prior art in diverse experimental settings.
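The core idea described in the abstract (a trainable mapping network that turns audio features into prefix embeddings for a frozen language model) can be illustrated with a minimal sketch. The code below assumes GPT-2 as the frozen decoder, a stand-in audio feature vector in place of a real audio encoder, and a simple MLP as the mapping network; the paper's actual encoders, mapping networks, and prefix lengths may differ.

```python
# Minimal sketch of prefix-based audio captioning with a frozen LM.
# Assumptions (not from the paper): GPT-2 as the frozen decoder, an MLP mapper,
# prefix length 10, and a random tensor standing in for audio encoder output.
import torch
import torch.nn as nn
from transformers import GPT2LMHeadModel, GPT2Tokenizer


class AudioPrefixMapper(nn.Module):
    """Maps pooled audio features to a sequence of prefix embeddings."""

    def __init__(self, audio_dim: int, prefix_len: int, lm_dim: int):
        super().__init__()
        self.prefix_len = prefix_len
        self.lm_dim = lm_dim
        self.net = nn.Sequential(
            nn.Linear(audio_dim, lm_dim * prefix_len),
            nn.Tanh(),
            nn.Linear(lm_dim * prefix_len, lm_dim * prefix_len),
        )

    def forward(self, audio_feat: torch.Tensor) -> torch.Tensor:
        # audio_feat: (batch, audio_dim) -> (batch, prefix_len, lm_dim)
        return self.net(audio_feat).view(-1, self.prefix_len, self.lm_dim)


# Frozen language model: only the mapper (and, in practice, the audio feature
# extractor) receives gradients.
lm = GPT2LMHeadModel.from_pretrained("gpt2")
for p in lm.parameters():
    p.requires_grad = False
tokenizer = GPT2Tokenizer.from_pretrained("gpt2")

mapper = AudioPrefixMapper(audio_dim=512, prefix_len=10, lm_dim=lm.config.n_embd)

# Toy batch: a stand-in audio feature and a target caption.
audio_feat = torch.randn(1, 512)
caption_ids = tokenizer(
    "a dog barks in the distance", return_tensors="pt"
).input_ids  # (1, T)

prefix = mapper(audio_feat)                             # (1, 10, n_embd)
token_emb = lm.transformer.wte(caption_ids)             # (1, T, n_embd)
inputs_embeds = torch.cat([prefix, token_emb], dim=1)   # (1, 10 + T, n_embd)

# Supervise only the caption tokens; -100 masks the prefix positions.
labels = torch.cat(
    [torch.full(prefix.shape[:2], -100, dtype=torch.long), caption_ids], dim=1
)
loss = lm(inputs_embeds=inputs_embeds, labels=labels).loss
loss.backward()  # gradients flow only into the mapper, the LM stays frozen
```

At inference time, the same prefix embeddings would condition autoregressive decoding (e.g. feeding `inputs_embeds` without caption tokens and sampling continuations), which is how the frozen LM produces a caption for a new audio clip.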

Authors (3)
  1. Minkyu Kim (51 papers)
  2. Kim Sung-Bin (15 papers)
  3. Tae-Hyun Oh (75 papers)
Citations (36)