
SoundSpaces: Audio-Visual Navigation in 3D Environments (1912.11474v3)

Published 24 Dec 2019 in cs.CV, cs.HC, cs.SD, and eess.AS

Abstract: Moving around in the world is naturally a multisensory experience, but today's embodied agents are deaf, restricted solely to their visual perception of the environment. We introduce audio-visual navigation for complex, acoustically and visually realistic 3D environments. By both seeing and hearing, the agent must learn to navigate to a sounding object. We propose a multi-modal deep reinforcement learning approach to train navigation policies end-to-end from a stream of egocentric audio-visual observations, allowing the agent to (1) discover elements of the geometry of the physical space indicated by the reverberating audio and (2) detect and follow sound-emitting targets. We further introduce SoundSpaces: a first-of-its-kind dataset of audio renderings based on geometrical acoustic simulations for two sets of publicly available 3D environments (Matterport3D and Replica), and we instrument Habitat to support the new sensor, making it possible to insert arbitrary sound sources in an array of real-world scanned environments. Our results show that audio greatly benefits embodied visual navigation in 3D spaces, and our work lays groundwork for new research in embodied AI with audio-visual perception.
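The abstract describes a policy trained end-to-end on a stream of egocentric audio-visual observations. As an illustration only, here is a minimal NumPy sketch of the general multi-modal fusion pattern: encode each modality separately, concatenate the embeddings, and map the fused vector to action probabilities. All array sizes, weights, and the 4-action space are invented for this sketch and are not taken from the paper, which uses learned CNN encoders and RL training.

```python
import numpy as np

rng = np.random.default_rng(0)

def encode(x, w):
    """Toy encoder: linear projection + ReLU (stand-in for a learned CNN)."""
    return np.maximum(w @ x, 0.0)

# Hypothetical observation sizes: flattened RGB frame and audio spectrogram.
rgb = rng.standard_normal(48)    # stand-in egocentric visual observation
spec = rng.standard_normal(32)   # stand-in binaural audio spectrogram

w_v = rng.standard_normal((16, 48)) * 0.1   # visual encoder weights
w_a = rng.standard_normal((16, 32)) * 0.1   # audio encoder weights
w_pi = rng.standard_normal((4, 32)) * 0.1   # policy head: 4 discrete actions

# Fuse modalities by concatenating embeddings, then score actions (softmax).
fused = np.concatenate([encode(rgb, w_v), encode(spec, w_a)])
logits = w_pi @ fused
probs = np.exp(logits - logits.max())
probs /= probs.sum()
```

In the actual system the action distribution would be sampled from at each step and the encoders updated by a reinforcement-learning objective; this sketch only shows the forward fusion pass.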

Authors (8)
  1. Changan Chen (31 papers)
  2. Unnat Jain (25 papers)
  3. Carl Schissler (5 papers)
  4. Sebastià Vicenç Amengual Garí (1 paper)
  5. Ziad Al-Halah (27 papers)
  6. Vamsi Krishna Ithapu (24 papers)
  7. Philip Robinson (4 papers)
  8. Kristen Grauman (136 papers)
Citations (26)
