Discovering Sounding Objects by Audio Queries for Audio Visual Segmentation (2309.09501v1)

Published 18 Sep 2023 in cs.CV

Abstract: Audio visual segmentation (AVS) aims to segment the sounding objects in each frame of a given video. To distinguish sounding objects from silent ones, both audio-visual semantic correspondence and temporal interaction are required. The previous method applies multi-frame cross-modal attention to conduct pixel-level interactions between audio features and the visual features of multiple frames simultaneously, which is both redundant and implicit. In this paper, we propose an Audio-Queried Transformer architecture, AQFormer, in which we define a set of object queries conditioned on audio information and associate each of them with particular sounding objects. Explicit object-level semantic correspondence between the audio and visual modalities is established by gathering object information from visual features with the predefined audio queries. In addition, an Audio-Bridged Temporal Interaction module is proposed to exchange sounding-object-relevant information among multiple frames, using audio features as the bridge. Extensive experiments on two AVS benchmarks show that our method achieves state-of-the-art performance, with gains of 7.1% M_J and 7.6% M_F on the MS3 setting.
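The core idea of the abstract, object queries conditioned on audio that gather object information from visual features via attention, can be illustrated with a minimal NumPy sketch. This is not the authors' architecture: the dimensions, the additive audio conditioning, and the dot-product mask head are all simplifying assumptions made for illustration.

```python
import numpy as np

rng = np.random.default_rng(0)

def softmax(x, axis=-1):
    e = np.exp(x - x.max(axis=axis, keepdims=True))
    return e / e.sum(axis=axis, keepdims=True)

def cross_attention(queries, keys, values):
    # queries: (N, d); keys/values: (HW, d) -> attended output: (N, d)
    scores = queries @ keys.T / np.sqrt(queries.shape[-1])
    return softmax(scores, axis=-1) @ values

# Hypothetical sizes: feature dim, number of object queries,
# flattened spatial positions per frame, number of frames.
d, n_queries, hw, n_frames = 16, 4, 64, 3

audio_feat = rng.normal(size=(n_frames, d))        # per-frame audio embedding
learned_queries = rng.normal(size=(n_queries, d))  # learnable object queries
visual_feats = rng.normal(size=(n_frames, hw, d))  # flattened visual features

mask_logits_per_frame = []
for t in range(n_frames):
    # Condition the object queries on this frame's audio
    # (sketched here as simple addition).
    audio_queries = learned_queries + audio_feat[t]
    # Gather object-level information from visual features
    # via cross-attention with the audio queries.
    obj_emb = cross_attention(audio_queries, visual_feats[t], visual_feats[t])
    # Predict one mask per query by dot product with visual features.
    mask_logits = obj_emb @ visual_feats[t].T  # (n_queries, HW)
    mask_logits_per_frame.append(mask_logits)

masks = np.stack(mask_logits_per_frame)
print(masks.shape)  # (3, 4, 64): frames x queries x spatial positions
```

Each query thus yields an object-level embedding and a per-frame mask; in the paper, the Audio-Bridged Temporal Interaction module additionally exchanges such object information across frames through the audio features, which this sketch omits.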

Authors (8)
  1. Shaofei Huang (20 papers)
  2. Han Li (183 papers)
  3. Yuqing Wang (84 papers)
  4. Hongji Zhu (2 papers)
  5. Jiao Dai (17 papers)
  6. Jizhong Han (48 papers)
  7. Wenge Rong (27 papers)
  8. Si Liu (132 papers)
Citations (12)
