
Evaluating Off-the-Shelf Machine Listening and Natural Language Models for Automated Audio Captioning (2110.07410v1)

Published 14 Oct 2021 in cs.LG, cs.CL, cs.SD, and eess.AS

Abstract: Automated audio captioning (AAC) is the task of automatically generating textual descriptions for general audio signals. A captioning system has to identify various kinds of information in the input signal and express them in natural language. Existing works mainly focus on investigating new methods and on improving performance measured on existing datasets. Since the task has attracted attention only recently, very few works on AAC study the performance of existing pre-trained audio and natural language processing resources. In this paper, we evaluate the performance of off-the-shelf models with a Transformer-based captioning approach. We utilize the freely available Clotho dataset to compare four different pre-trained machine listening models, four word embedding models, and their combinations in many different settings. Our evaluation suggests that YAMNet combined with BERT embeddings produces the best captions. Moreover, in general, fine-tuning pre-trained word embeddings can lead to better performance. Finally, we show that sequences of audio embeddings can be processed using a Transformer encoder to produce higher-quality captions.
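
The core architectural idea described in the abstract, feeding a sequence of frame-level audio embeddings through a Transformer encoder and decoding caption words whose embeddings may come from a pre-trained language model, can be sketched as below. This is a minimal illustration, not the authors' implementation: the class name, dimensions, and vocabulary size are assumptions, positional encodings are omitted for brevity, and the paper compares several machine listening and word embedding models beyond the YAMNet/BERT pairing highlighted above.

```python
import torch
import torch.nn as nn

class AudioCaptioner(nn.Module):
    """Sketch: Transformer encoder over pre-computed audio embeddings,
    Transformer decoder over caption tokens (hypothetical sizes)."""

    def __init__(self, audio_dim=1024, d_model=256, vocab_size=5000,
                 nhead=4, num_layers=2):
        super().__init__()
        # Project frame-level audio embeddings (e.g., YAMNet-style) to the model width.
        self.audio_proj = nn.Linear(audio_dim, d_model)
        enc_layer = nn.TransformerEncoderLayer(d_model, nhead, batch_first=True)
        self.encoder = nn.TransformerEncoder(enc_layer, num_layers)
        # Word embeddings; these could be initialised from a pre-trained model
        # (e.g., BERT or Word2Vec) and optionally fine-tuned, as the paper studies.
        self.word_emb = nn.Embedding(vocab_size, d_model)
        dec_layer = nn.TransformerDecoderLayer(d_model, nhead, batch_first=True)
        self.decoder = nn.TransformerDecoder(dec_layer, num_layers)
        self.out = nn.Linear(d_model, vocab_size)

    def forward(self, audio_embeds, token_ids):
        # audio_embeds: (batch, frames, audio_dim); token_ids: (batch, seq_len)
        memory = self.encoder(self.audio_proj(audio_embeds))
        tgt = self.word_emb(token_ids)
        # Causal mask so each position attends only to earlier caption tokens.
        seq_len = token_ids.size(1)
        causal = torch.triu(torch.full((seq_len, seq_len), float("-inf")), diagonal=1)
        dec = self.decoder(tgt, memory, tgt_mask=causal)
        return self.out(dec)  # logits over the caption vocabulary

# Toy usage: two clips of 10 embedding frames and a 7-token caption prefix.
model = AudioCaptioner()
logits = model(torch.randn(2, 10, 1024), torch.randint(0, 5000, (2, 7)))
print(logits.shape)  # torch.Size([2, 7, 5000])
```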

Authors (4)
  1. Benno Weck (9 papers)
  2. Xavier Favory (12 papers)
  3. Konstantinos Drossos (44 papers)
  4. Xavier Serra (82 papers)
Citations (8)