
A study on joint modeling and data augmentation of multi-modalities for audio-visual scene classification (2203.04114v3)

Published 7 Mar 2022 in cs.MM, cs.CV, cs.SD, and eess.AS

Abstract: In this paper, we propose two techniques, namely joint modeling and data augmentation, to improve system performance for audio-visual scene classification (AVSC). We employ pre-trained networks trained only on image data sets to extract video embeddings, whereas for audio embedding models, we train them from scratch. We explore different neural network architectures for joint modeling to effectively combine the video and audio modalities. Moreover, data augmentation strategies are investigated to increase the audio-visual training set size. For the video modality, the effectiveness of several operations in RandAugment is verified. An audio-video joint mixup scheme is proposed to further improve AVSC performance. Evaluated on the development set of TAU Urban Audio Visual Scenes 2021, our final system achieves the best accuracy of 94.2% among all single AVSC systems submitted to DCASE 2021 Task 1b.
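The audio-video joint mixup mentioned in the abstract can be sketched as follows. This is a minimal illustration, not the authors' implementation: the function name `joint_mixup` and the default `alpha` are assumptions. The key idea in a joint scheme is that one mixing coefficient, drawn from a Beta distribution, is shared across the audio features, the video features, and the labels, so the two modalities of a mixed sample stay aligned.

```python
import numpy as np

def joint_mixup(audio_a, video_a, label_a,
                audio_b, video_b, label_b,
                alpha=0.2, rng=None):
    """Mix two audio-visual samples with one shared coefficient.

    lam ~ Beta(alpha, alpha) is applied identically to the audio
    features, the video features, and the (one-hot) labels, so the
    mixed audio and video still describe the same virtual scene.
    """
    rng = rng or np.random.default_rng()
    lam = rng.beta(alpha, alpha)
    audio = lam * audio_a + (1.0 - lam) * audio_b
    video = lam * video_a + (1.0 - lam) * video_b
    label = lam * label_a + (1.0 - lam) * label_b
    return audio, video, label
```

Drawing a fresh coefficient per pair while reusing it across modalities is what distinguishes a joint scheme from applying mixup to audio and video independently, which would pair an audio mixture with a mismatched video mixture.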

Authors (11)
  1. Qing Wang (341 papers)
  2. Jun Du (130 papers)
  3. Siyuan Zheng (1 paper)
  4. Yunqing Li (4 papers)
  5. Yajian Wang (3 papers)
  6. Yuzhong Wu (13 papers)
  7. Hu Hu (18 papers)
  8. Chao-Han Huck Yang (89 papers)
  9. Sabato Marco Siniscalchi (46 papers)
  10. Yannan Wang (23 papers)
  11. Chin-Hui Lee (52 papers)
Citations (2)
