Sound to Visual Scene Generation by Audio-to-Visual Latent Alignment (2303.17490v1)

Published 30 Mar 2023 in cs.CV, cs.MM, cs.SD, eess.AS, and eess.IV

Abstract: How does audio describe the world around us? In this paper, we propose a method for generating an image of a scene from sound. Our method addresses the challenges of dealing with the large gaps that often exist between sight and sound. We design a model that works by scheduling the learning procedure of each model component to associate audio-visual modalities despite their information gaps. The key idea is to enrich the audio features with visual information by learning to align audio to visual latent space. We translate the input audio to visual features, then use a pre-trained generator to produce an image. To further improve the quality of our generated images, we use sound source localization to select the audio-visual pairs that have strong cross-modal correlations. We obtain substantially better results on the VEGAS and VGGSound datasets than prior approaches. We also show that we can control our model's predictions by applying simple manipulations to the input waveform, or to the latent space.
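
The pipeline the abstract describes (encode audio, align it to a visual latent space, then decode with a frozen pre-trained image generator) can be sketched roughly as below. This is a minimal illustrative sketch, not the authors' implementation: all names (AudioEncoder, alignment_loss, the generator placeholder) are hypothetical, and the alignment objective shown is one simple choice rather than the paper's exact loss or staged training schedule.

```python
# Hypothetical sketch of audio-to-visual latent alignment (not the paper's code).
# An audio encoder maps a spectrogram into the latent space of a frozen,
# pre-trained image generator; training pulls audio latents toward the
# visual latents of their paired video frames.

import torch
import torch.nn as nn
import torch.nn.functional as F

class AudioEncoder(nn.Module):
    """Maps a log-mel spectrogram (B, 1, n_mels, T) to a visual-style latent."""
    def __init__(self, latent_dim=512):
        super().__init__()
        self.conv = nn.Sequential(
            nn.Conv2d(1, 32, 3, stride=2, padding=1), nn.ReLU(),
            nn.Conv2d(32, 64, 3, stride=2, padding=1), nn.ReLU(),
            nn.AdaptiveAvgPool2d(1),          # (B, 64, 1, 1)
        )
        self.proj = nn.Linear(64, latent_dim)

    def forward(self, spec):
        h = self.conv(spec).flatten(1)        # (B, 64)
        return self.proj(h)                   # (B, latent_dim)

def alignment_loss(audio_z, visual_z):
    """One simple alignment objective: regression plus cosine agreement.
    The paper's actual loss and curriculum may differ."""
    return F.mse_loss(audio_z, visual_z) + (
        1.0 - F.cosine_similarity(audio_z, visual_z).mean())

# Training step, assuming a frozen pre-trained visual encoder and generator:
#   audio_z  = audio_encoder(spec)
#   visual_z = visual_encoder(frame).detach()   # no gradients into the frozen model
#   loss     = alignment_loss(audio_z, visual_z)
# Inference: image = generator(audio_encoder(spec))
```

At inference the audio latent stands in for the visual latent as the generator's input, which is also why simple manipulations of the waveform or of the latent space can steer the generated image, as the abstract notes.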

Authors (5)
  1. Kim Sung-Bin (15 papers)
  2. Arda Senocak (18 papers)
  3. Hyunwoo Ha (5 papers)
  4. Andrew Owens (52 papers)
  5. Tae-Hyun Oh (75 papers)
Citations (25)
