Investigating Modality Bias in Audio Visual Video Parsing (2203.16860v2)

Published 31 Mar 2022 in cs.CV, cs.MM, cs.SD, eess.AS, and eess.IV

Abstract: We focus on the audio-visual video parsing (AVVP) problem that involves detecting audio and visual event labels with temporal boundaries. The task is especially challenging since it is weakly supervised with only event labels available as a bag of labels for each video. An existing state-of-the-art model for AVVP uses a hybrid attention network (HAN) to generate cross-modal features for both audio and visual modalities, and an attentive pooling module that aggregates predicted audio and visual segment-level event probabilities to yield video-level event probabilities. We provide a detailed analysis of modality bias in the existing HAN architecture, where a modality is completely ignored during prediction. We also propose a variant of feature aggregation in HAN that leads to an absolute gain in F-scores of about 2% and 1.6% for visual and audio-visual events at both segment-level and event-level, in comparison to the existing HAN model.
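To make the aggregation step concrete, below is a minimal, illustrative sketch of attentive multiple-instance (MIL) pooling of the kind the abstract describes: per-segment, per-modality event probabilities are combined with learned temporal and modality attention weights to produce a video-level probability. This is an assumption-laden sketch in PyTorch, not the authors' implementation; the module name, layer choices, and tensor shapes (batch B, T segments, C event classes) are hypothetical.

```python
# Illustrative attentive MIL pooling for weakly supervised AVVP.
# Assumptions: PyTorch; audio/visual features of shape (B, T, dim),
# e.g. cross-modal features from a HAN-style encoder. Not the paper's code.
import torch
import torch.nn as nn


class AttentiveMILPooling(nn.Module):
    """Aggregates segment-level, per-modality event probabilities into
    video-level event probabilities (sketch only)."""

    def __init__(self, dim: int, num_classes: int):
        super().__init__()
        self.temporal_att = nn.Linear(dim, num_classes)  # attention over segments
        self.modality_att = nn.Linear(dim, num_classes)  # attention over modalities
        self.classifier = nn.Linear(dim, num_classes)    # segment-level classifier

    def forward(self, audio_feats: torch.Tensor, visual_feats: torch.Tensor):
        # Stack the two modality streams: (B, 2, T, dim)
        x = torch.stack([audio_feats, visual_feats], dim=1)
        # Segment-level event probabilities per modality: (B, 2, T, C)
        seg_probs = torch.sigmoid(self.classifier(x))
        # Attention weights normalized over time (dim=2) and modality (dim=1)
        w_time = torch.softmax(self.temporal_att(x), dim=2)
        w_mod = torch.softmax(self.modality_att(x), dim=1)
        # Video-level probability: weighted sum over modalities and segments
        video_probs = (w_mod * w_time * seg_probs).sum(dim=(1, 2))  # (B, C)
        return seg_probs, video_probs
```

In a pooling scheme like this, the modality attention weights `w_mod` can collapse onto a single stream, so predictions are driven almost entirely by one modality; this is the kind of modality bias the paper analyzes, and its proposed variant of feature aggregation is aimed at mitigating it.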

Authors (4)
  1. Piyush Singh Pasi (2 papers)
  2. Shubham Nemani (1 paper)
  3. Preethi Jyothi (51 papers)
  4. Ganesh Ramakrishnan (88 papers)
Citations (4)
