Sound Source Localization is All about Cross-Modal Alignment (2309.10724v1)

Published 19 Sep 2023 in cs.CV, cs.AI, cs.MM, cs.SD, and eess.AS

Abstract: Humans can easily perceive the direction of sound sources in a visual scene, termed sound source localization. Recent studies on learning-based sound source localization have mainly explored the problem from a localization perspective. However, prior work and existing benchmarks do not account for a more important aspect of the problem, cross-modal semantic understanding, which is essential for genuine sound source localization. Cross-modal semantic understanding is important for handling semantically mismatched audio-visual events, e.g., silent objects or off-screen sounds. To account for this, we propose a cross-modal alignment task as a joint task with sound source localization to better learn the interaction between audio and visual modalities. Thereby, we achieve high localization performance with strong cross-modal semantic understanding. Our method outperforms the state-of-the-art approaches in both sound source localization and cross-modal retrieval. Our work suggests that jointly tackling both tasks is necessary for genuine sound source localization.
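The abstract does not spell out how the cross-modal alignment objective is formulated, but alignment between paired audio and visual embeddings is commonly realized with a symmetric contrastive (InfoNCE-style) loss, where matched audio-visual pairs are pulled together and mismatched pairs pushed apart. A minimal NumPy sketch of that general idea, with all names and the temperature value being illustrative assumptions rather than the paper's actual implementation:

```python
import numpy as np

def cross_modal_alignment_loss(audio_emb, visual_emb, temperature=0.07):
    """Symmetric InfoNCE-style loss aligning paired audio/visual embeddings.

    audio_emb, visual_emb: (N, D) arrays; row i of each is a matched pair.
    Hypothetical sketch -- not the paper's exact objective.
    """
    # L2-normalize so the dot product is cosine similarity.
    a = audio_emb / np.linalg.norm(audio_emb, axis=1, keepdims=True)
    v = visual_emb / np.linalg.norm(visual_emb, axis=1, keepdims=True)
    logits = a @ v.T / temperature          # (N, N) pairwise similarities
    labels = np.arange(len(a))              # diagonal entries are positives

    def cross_entropy(lg, lb):
        lg = lg - lg.max(axis=1, keepdims=True)  # numerical stability
        log_probs = lg - np.log(np.exp(lg).sum(axis=1, keepdims=True))
        return -log_probs[np.arange(len(lb)), lb].mean()

    # Average the audio-to-visual and visual-to-audio directions.
    return 0.5 * (cross_entropy(logits, labels) + cross_entropy(logits.T, labels))
```

Minimizing such a loss jointly with a localization objective encourages the shared embedding space to carry the semantic correspondence that the abstract argues is missing from localization-only training.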

Authors (6)
  1. Arda Senocak (18 papers)
  2. Hyeonggon Ryu (8 papers)
  3. Junsik Kim (36 papers)
  4. Tae-Hyun Oh (75 papers)
  5. Hanspeter Pfister (131 papers)
  6. Joon Son Chung (106 papers)
Citations (13)
