Objects that Sound (1712.06651v2)

Published 18 Dec 2017 in cs.CV, cs.LG, cs.MM, cs.SD, and eess.AS

Abstract: In this paper our objectives are, first, networks that can embed audio and visual inputs into a common space that is suitable for cross-modal retrieval; and second, a network that can localize the object that sounds in an image, given the audio signal. We achieve both these objectives by training from unlabelled video using only audio-visual correspondence (AVC) as the objective function. This is a form of cross-modal self-supervision from video. To this end, we design new network architectures that can be trained for cross-modal retrieval and localizing the sound source in an image, by using the AVC task. We make the following contributions: (i) show that audio and visual embeddings can be learnt that enable both within-mode (e.g. audio-to-audio) and between-mode retrieval; (ii) explore various architectures for the AVC task, including those for the visual stream that ingest a single image, or multiple images, or a single image and multi-frame optical flow; (iii) show that the semantic object that sounds within an image can be localized (using only the sound, no motion or flow information); and (iv) give a cautionary tale on how to avoid undesirable shortcuts in the data preparation.

Authors (2)
  1. Andrew Zisserman (248 papers)
  2. Relja Arandjelović (18 papers)
Citations (516)

Summary

Objects that Sound: A Cross-Modal Learning Approach

In this paper, the authors explore the intersection of audio and visual data through self-supervised cross-modal learning. Their work centers on leveraging large-scale, unlabeled video to train deep network architectures that associate a sound with its visual source. This is realized in two main tasks: cross-modal retrieval and audio-visual object localization.

Methodology

The paper introduces two network architectures, one for each task: AVE-Net for cross-modal retrieval and AVOL-Net for object localization. Both are trained on the audio-visual correspondence (AVC) task, a binary classification of whether an image and an audio clip come from the same video, thereby employing a form of self-supervision with no need for pre-labeled data.
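To make the training signal concrete, here is a minimal sketch of how AVC positive and negative pairs could be assembled. The data layout (pre-extracted frames and time-aligned spectrogram slices) and helper names are hypothetical, not the authors' actual pipeline:

```python
import random

def make_avc_pairs(videos, num_pairs):
    """Build (frame, spectrogram, label) examples for the AVC task.

    `videos` is assumed to be a list of dicts with pre-extracted, time-aligned
    'frames' and 'spectrograms' (hypothetical format, at least two videos).
    Label 1 = corresponding pair, 0 = mismatched pair.
    """
    pairs = []
    for _ in range(num_pairs):
        vid = random.choice(videos)
        t = random.randrange(len(vid["frames"]))
        if random.random() < 0.5:
            # Positive: frame and audio taken from the same moment of the same video.
            pairs.append((vid["frames"][t], vid["spectrograms"][t], 1))
        else:
            # Negative: audio drawn from a different, randomly chosen video.
            other = random.choice([v for v in videos if v is not vid])
            s = random.randrange(len(other["spectrograms"]))
            pairs.append((vid["frames"][t], other["spectrograms"][s], 0))
    return pairs
```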

AVE-Net aligns visual and audio embeddings with an architecture designed around embedding distance. Each modality is mapped to a compact embedding, the embeddings are L2-normalized, and the correspondence decision is based solely on the Euclidean distance between them. This forces the two modalities into a shared space and makes efficient cross-modal retrieval by nearest-neighbor search possible.
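A minimal PyTorch sketch of this idea follows; the backbone modules are placeholders and the distance-to-logits head is a simplification of the description above, not the authors' exact architecture:

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class AVENetSketch(nn.Module):
    """Simplified AVE-Net-style model (illustrative only).

    Each modality is embedded into a shared space and L2-normalized; the
    correspondence decision depends only on the Euclidean distance between
    the two embeddings.
    """

    def __init__(self, vision_backbone, audio_backbone):
        super().__init__()
        self.vision = vision_backbone  # maps an image batch to (B, D) features
        self.audio = audio_backbone    # maps a spectrogram batch to (B, D) features
        self.head = nn.Linear(1, 2)    # tiny head: distance -> correspond / mismatch logits

    def forward(self, image, spectrogram):
        v = F.normalize(self.vision(image), dim=1)       # unit-norm visual embedding
        a = F.normalize(self.audio(spectrogram), dim=1)  # unit-norm audio embedding
        dist = torch.norm(v - a, dim=1, keepdim=True)    # per-pair Euclidean distance, shape (B, 1)
        return self.head(dist), v, a                     # correspondence logits plus embeddings
```

Because the correspondence head sees only the distance, retrieval at test time reduces to nearest-neighbor search in the shared embedding space using the same Euclidean metric.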

AVOL-Net, on the other hand, uses a Multiple Instance Learning (MIL) framework to localize the source of a sound within an image. It computes a similarity score between the audio embedding and a visual descriptor at each spatial location, yielding a localization map; the maximal response identifies the sound-producing object in the visual field.
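The following sketch shows MIL-style scoring under the assumption that the visual stream produces a spatial grid of local descriptors and the audio stream a single embedding of matching dimensionality; it is illustrative, not the exact AVOL-Net head:

```python
import torch
import torch.nn.functional as F

def avol_similarity_map(visual_features, audio_embedding):
    """MIL-style localization sketch (illustrative).

    visual_features: (B, C, H, W) grid of local visual descriptors.
    audio_embedding: (B, C) audio embedding.
    Returns a per-location correspondence map and a max-pooled image-level score.
    """
    v = F.normalize(visual_features, dim=1)       # unit-norm local descriptors
    a = F.normalize(audio_embedding, dim=1)       # unit-norm audio embedding
    sim = torch.einsum("bchw,bc->bhw", v, a)      # cosine similarity per spatial location
    loc_map = torch.sigmoid(sim)                  # localization map in [0, 1]
    score = loc_map.flatten(1).max(dim=1).values  # MIL: the image corresponds if any location does
    return loc_map, score
```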

Results

The networks were evaluated on the AudioSet-Instruments dataset, which contains labeled video segments focused on musical instruments and related categories. Although the labels were not used during training, they served to assess retrieval and localization quality. Measured by normalized discounted cumulative gain (nDCG), AVE-Net outperformed both unsupervised and supervised baselines, confirming the efficacy of the cross-modal alignment technique.
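For reference, a minimal nDCG computation with binary relevance and the standard log2 discount is sketched below; details such as the gain function and cut-off may differ from the paper's exact evaluation protocol:

```python
import math

def ndcg(relevances, k=None):
    """Normalized discounted cumulative gain for one ranked list.

    `relevances` are the relevance scores of retrieved items in ranked order
    (e.g. 1 if an item shares the query's label, 0 otherwise).
    """
    rels = relevances[:k] if k else relevances
    dcg = sum(r / math.log2(i + 2) for i, r in enumerate(rels))
    ideal = sorted(relevances, reverse=True)
    ideal = ideal[:k] if k else ideal
    idcg = sum(r / math.log2(i + 2) for i, r in enumerate(ideal))
    return dcg / idcg if idcg > 0 else 0.0
```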

For object localization, AVOL-Net demonstrated strong results, accurately highlighting sound-producing objects across a variety of contexts, as evidenced by validation with deliberately mismatched audio-visual pairs. This shows that its localization is driven by the audio context rather than by visual saliency alone.

Implications and Future Directions

This research has significant implications for multimedia retrieval, video understanding, and multi-modal learning. By enabling systems to correlate and retrieve data across modalities without explicit supervision, the methods move beyond traditional machine learning approaches that typically demand extensive labeled datasets.

The paper also opens new avenues for future exploration in refining object localization mechanisms, potentially incorporating soft attention frameworks or considering improvements in dealing with background noise and cluttered scenes. Moreover, the prospect of embedding such cross-modal capabilities in robotics, augmented reality, and more interactive AI systems could enhance real-world applications, providing richer, more contextually aware interactions with the environment.

Overall, the paper sets a foundation for future developments in the cross-utilization of audio and visual data, fostering advancements in AI's capacity to understand and leverage the multi-sensory richness of real-world phenomena.
