
Cross-modal supervised learning for better acoustic representations (1911.07917v2)

Published 15 Nov 2019 in cs.CV, cs.LG, cs.SD, and eess.AS

Abstract: Obtaining large-scale human-labeled datasets to train acoustic representation models is very challenging. In contrast, data with machine-generated labels is easy to collect. In this work, we propose to exploit machine-generated labels to learn better acoustic representations, based on the synchronization between vision and audio. First, we collect a large-scale video dataset of 15 million samples, totaling 16,320 hours. Each video is 3 to 5 seconds long and is annotated automatically by publicly available visual and audio classification models. Second, we train several classical convolutional neural networks (CNNs), including VGGish, ResNet-50, and MobileNet v2, and make several improvements to VGGish that yield better results. Finally, we transfer our models to three standard external audio classification benchmarks and achieve a significant performance boost over state-of-the-art results. Models and code are available at: https://github.com/Deeperjia/vgg-like-audio-models.
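The pipeline the abstract describes (log-mel spectrogram patches fed to a VGGish-style CNN, trained against machine-generated labels rather than human annotations) can be sketched as below. This is a minimal illustration assuming PyTorch; the 527-class output, the 96-frame × 64-mel-bin input patch, and the random pseudo-labels are illustrative assumptions, not the authors' exact configuration (see the linked repository for their actual models).

```python
import torch
import torch.nn as nn

class VGGishLike(nn.Module):
    """VGGish-style CNN over log-mel spectrogram patches (96 frames x 64 mel bins)."""
    def __init__(self, num_classes=527):
        super().__init__()
        def block(cin, cout, n=1):
            # n conv+ReLU layers followed by 2x2 max pooling
            layers = []
            for i in range(n):
                layers += [nn.Conv2d(cin if i == 0 else cout, cout, 3, padding=1),
                           nn.ReLU(inplace=True)]
            layers.append(nn.MaxPool2d(2))
            return layers
        self.features = nn.Sequential(
            *block(1, 64), *block(64, 128), *block(128, 256, 2), *block(256, 512, 2))
        self.classifier = nn.Sequential(
            nn.Flatten(),
            nn.Linear(512 * 6 * 4, 4096), nn.ReLU(inplace=True),  # 96x64 input -> 6x4 feature map
            nn.Linear(4096, 4096), nn.ReLU(inplace=True),
            nn.Linear(4096, num_classes))
    def forward(self, x):  # x: (batch, 1, 96, 64)
        return self.classifier(self.features(x))

# One training step against machine-generated (soft, multi-label) targets:
model = VGGishLike()
criterion = nn.BCEWithLogitsLoss()  # suits soft multi-label pseudo-labels
optimizer = torch.optim.Adam(model.parameters(), lr=1e-4)

logmel = torch.randn(8, 1, 96, 64)   # a batch of log-mel patches (placeholder data)
machine_labels = torch.rand(8, 527)  # scores from pretrained visual/audio taggers (placeholder)
loss = criterion(model(logmel), machine_labels)
loss.backward()
optimizer.step()
```

A binary cross-entropy objective over per-class scores (rather than a softmax over one label) is a natural fit here, since the machine-generated annotations are soft, possibly overlapping tags rather than a single ground-truth class.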

Authors (6)
  1. Shaoyong Jia (1 paper)
  2. Xin Shu (10 papers)
  3. Yang Yang (884 papers)
  4. Dawei Liang (6 papers)
  5. Qiyue Liu (2 papers)
  6. Junhui Liu (23 papers)
Citations (2)
