Audio Captioning using Pre-Trained Large-Scale Language Model Guided by Audio-based Similar Caption Retrieval (2012.07331v1)

Published 14 Dec 2020 in eess.AS, cs.CL, and cs.SD

Abstract: The goal of audio captioning is to translate input audio into a description in natural language. One problem in audio captioning is the lack of training data, since audio-caption pairs are difficult to collect by crawling the web. In this study, to overcome this problem, we propose to use a pre-trained large-scale LLM. Since audio cannot be directly input into such an LLM, we utilize guidance captions retrieved from the training dataset based on the similarity between audio clips. The caption of the input audio is then generated by the pre-trained LLM while referring to the guidance captions. Experimental results show that (i) the proposed method succeeded in using a pre-trained LLM for audio captioning, and (ii) the oracle performance of the pre-trained-model-based caption generator was clearly better than that of a conventional method trained from scratch.
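The retrieve-then-generate pipeline the abstract describes can be illustrated with a short sketch: embed the query audio, retrieve the captions of the most acoustically similar training clips, and condition a frozen pre-trained language model on them. The sketch below is an illustrative assumption rather than the authors' implementation: `embed_audio` is a hypothetical stand-in for the paper's audio encoder, GPT-2 stands in for the pre-trained LLM, and the text-prompt conditioning is a simplification of how the paper feeds guidance captions to the model.

```python
# Minimal sketch of audio-based similar-caption retrieval guiding a
# pre-trained language model. Assumptions: embed_audio is a hypothetical
# audio encoder, GPT-2 stands in for the pre-trained LLM, and the prompt
# format is an illustrative guess, not the paper's conditioning scheme.
import numpy as np
import torch
from transformers import GPT2LMHeadModel, GPT2Tokenizer


def embed_audio(waveform: np.ndarray) -> np.ndarray:
    """Hypothetical audio encoder returning a fixed-size embedding.
    Any pre-trained audio embedding model could stand in here."""
    raise NotImplementedError


def retrieve_guidance_captions(query_emb, train_embs, train_captions, k=3):
    """Return the k training captions whose audio is most similar
    (by cosine similarity) to the query audio."""
    train = train_embs / np.linalg.norm(train_embs, axis=1, keepdims=True)
    q = query_emb / np.linalg.norm(query_emb)
    scores = train @ q
    top = np.argsort(scores)[::-1][:k]
    return [train_captions[i] for i in top]


def generate_caption(guidance_captions, max_new_tokens=30):
    """Generate a caption with a frozen pre-trained LM, conditioned on
    the retrieved guidance captions via a simple text prompt."""
    tokenizer = GPT2Tokenizer.from_pretrained("gpt2")
    model = GPT2LMHeadModel.from_pretrained("gpt2")
    prompt = (
        "Similar sounds were described as: "
        + "; ".join(guidance_captions)
        + ". This sound is"
    )
    inputs = tokenizer(prompt, return_tensors="pt")
    with torch.no_grad():
        out = model.generate(
            **inputs,
            max_new_tokens=max_new_tokens,
            do_sample=False,
            pad_token_id=tokenizer.eos_token_id,
        )
    # Decode only the newly generated continuation.
    new_tokens = out[0][inputs["input_ids"].shape[1]:]
    return tokenizer.decode(new_tokens, skip_special_tokens=True)
```

Keeping the language model frozen and moving the audio-specific knowledge into the retrieval step is what lets the method sidestep the scarcity of audio-caption pairs: only the retrieval index depends on audio training data.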

Authors (5)
  1. Yuma Koizumi (39 papers)
  2. Yasunori Ohishi (29 papers)
  3. Daisuke Niizumi (29 papers)
  4. Daiki Takeuchi (30 papers)
  5. Masahiro Yasuda (22 papers)
Citations (38)