
Leveraging Pre-trained BERT for Audio Captioning (2203.02838v2)

Published 6 Mar 2022 in eess.AS, cs.AI, and cs.SD

Abstract: Audio captioning aims at using natural language to describe the content of an audio clip. Existing audio captioning systems are generally based on an encoder-decoder architecture, in which acoustic information is extracted by an audio encoder and a language decoder is then used to generate the captions. Training an audio captioning system often encounters the problem of data scarcity. Transferring knowledge from pre-trained audio models such as Pre-trained Audio Neural Networks (PANNs) has recently emerged as a useful method to mitigate this issue. However, there has been less attention on exploiting pre-trained language models for the decoder, compared with the encoder. BERT is a pre-trained language model that has been extensively used in NLP tasks. Nevertheless, the potential of BERT as the language decoder for audio captioning has not been investigated. In this study, we demonstrate the efficacy of the pre-trained BERT model for audio captioning. Specifically, we apply PANNs as the encoder and initialize the decoder from the public pre-trained BERT models. We conduct an empirical study on the use of these BERT models for the decoder in the audio captioning model. Our models achieve competitive results with the existing audio captioning methods on the AudioCaps dataset.
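The encoder-decoder setup the abstract describes can be sketched as follows. This is a minimal, hypothetical illustration, not the paper's implementation: a linear layer stands in for the PANNs audio encoder, a randomly initialized Transformer decoder stands in for the BERT-initialized decoder, and all dimensions and module names are illustrative assumptions.

```python
import torch
import torch.nn as nn

class ToyAudioCaptioner(nn.Module):
    """Illustrative encoder-decoder audio captioner (not the paper's model)."""

    def __init__(self, vocab_size=1000, n_mels=64, d_model=128):
        super().__init__()
        # Stand-in for PANNs: project log-mel frames to d_model embeddings.
        self.audio_encoder = nn.Linear(n_mels, d_model)
        self.token_embed = nn.Embedding(vocab_size, d_model)
        layer = nn.TransformerDecoderLayer(d_model, nhead=4, batch_first=True)
        # In the paper, the decoder weights are initialized from public
        # pre-trained BERT checkpoints; here they are random for brevity.
        self.decoder = nn.TransformerDecoder(layer, num_layers=2)
        self.lm_head = nn.Linear(d_model, vocab_size)

    def forward(self, mel_frames, caption_tokens):
        memory = self.audio_encoder(mel_frames)   # (B, T_audio, d_model)
        tgt = self.token_embed(caption_tokens)    # (B, T_text, d_model)
        hidden = self.decoder(tgt, memory)        # cross-attends to audio
        return self.lm_head(hidden)               # (B, T_text, vocab_size)

model = ToyAudioCaptioner()
mel = torch.randn(2, 100, 64)               # 2 clips, 100 frames, 64 mel bins
tokens = torch.randint(0, 1000, (2, 12))    # 2 partial captions of 12 tokens
logits = model(mel, tokens)
print(logits.shape)
```

The decoder generates each caption token while cross-attending to the full sequence of acoustic embeddings, which is how the language decoder is conditioned on the audio content.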

Authors (9)
  1. Xubo Liu (66 papers)
  2. Xinhao Mei (24 papers)
  3. Qiushi Huang (23 papers)
  4. Jianyuan Sun (11 papers)
  5. Jinzheng Zhao (18 papers)
  6. Haohe Liu (59 papers)
  7. Mark D. Plumbley (114 papers)
  8. Volkan Kılıç (8 papers)
  9. Wenwu Wang (148 papers)
Citations (27)