CM-PIE: Cross-modal perception for interactive-enhanced audio-visual video parsing (2310.07517v1)

Published 11 Oct 2023 in cs.CV and cs.MM

Abstract: Audio-visual video parsing is the task of categorizing a video at the segment level with weak labels, and predicting them as audible or visible events. Recent methods for this task leverage the attention mechanism to capture the semantic correlations among the whole video across the audio-visual modalities. However, these approaches have overlooked the importance of individual segments within a video and the relationship among them, and tend to rely on a single modality when learning features. In this paper, we propose a novel interactive-enhanced cross-modal perception method~(CM-PIE), which can learn fine-grained features by applying a segment-based attention module. Furthermore, a cross-modal aggregation block is introduced to jointly optimize the semantic representation of audio and visual signals by enhancing inter-modal interactions. The experimental results show that our model offers improved parsing performance on the Look, Listen, and Parse dataset compared to other methods.

Authors (7)
  1. Yaru Chen (6 papers)
  2. Ruohao Guo (17 papers)
  3. Xubo Liu (66 papers)
  4. Peipei Wu (5 papers)
  5. Guangyao Li (37 papers)
  6. Zhenbo Li (5 papers)
  7. Wenwu Wang (148 papers)
Citations (4)