Action2Sound: Ambient-Aware Generation of Action Sounds from Egocentric Videos (2406.09272v3)

Published 13 Jun 2024 in cs.CV, cs.AI, cs.SD, and eess.AS

Abstract: Generating realistic audio for human actions is important for many applications, such as creating sound effects for films or virtual reality games. Existing approaches implicitly assume total correspondence between the video and audio during training, yet many sounds happen off-screen and have weak to no correspondence with the visuals -- resulting in uncontrolled ambient sounds or hallucinations at test time. We propose a novel ambient-aware audio generation model, AV-LDM. We devise a novel audio-conditioning mechanism to learn to disentangle foreground action sounds from the ambient background sounds in in-the-wild training videos. Given a novel silent video, our model uses retrieval-augmented generation to create audio that matches the visual content both semantically and temporally. We train and evaluate our model on two in-the-wild egocentric video datasets, Ego4D and EPIC-KITCHENS, and we introduce Ego4D-Sounds -- 1.2M curated clips with action-audio correspondence. Our model outperforms an array of existing methods, allows controllable generation of the ambient sound, and even shows promise for generalizing to computer graphics game clips. Overall, our approach is the first to focus video-to-audio generation faithfully on the observed visual content despite training from uncurated clips with natural background sounds.
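To make the conditioning idea from the abstract concrete, below is a minimal PyTorch sketch of how a generator's denoiser might take both video features and a retrieved ambient-audio embedding as conditioning signals. This is an illustrative assumption, not the authors' AV-LDM implementation: the module names, feature dimensions, and the simple nearest-neighbour retrieval over a small bank are all hypothetical.

```python
# Hypothetical sketch of ambient-aware, retrieval-augmented audio generation.
# Everything here (shapes, modules, retrieval scheme) is an illustrative
# assumption and is not taken from the AV-LDM paper or its code.
import torch
import torch.nn as nn
import torch.nn.functional as F


class AmbientConditionedDenoiser(nn.Module):
    """Toy denoiser conditioned on video features and a retrieved ambient embedding."""

    def __init__(self, latent_dim=64, video_dim=128, ambient_dim=32):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(latent_dim + video_dim + ambient_dim, 256),
            nn.GELU(),
            nn.Linear(256, latent_dim),
        )

    def forward(self, noisy_latent, video_feat, ambient_feat):
        # Concatenate the noisy audio latent with both conditioning signals,
        # so the model can attribute action sounds to the visuals and
        # background texture to the ambient embedding.
        x = torch.cat([noisy_latent, video_feat, ambient_feat], dim=-1)
        return self.net(x)  # predicted noise / denoised latent


def retrieve_ambient(query_video_feat, bank_video_feats, bank_ambient_feats):
    """Nearest-neighbour lookup of an ambient embedding for a silent query video."""
    sims = F.cosine_similarity(
        query_video_feat.unsqueeze(1), bank_video_feats.unsqueeze(0), dim=-1
    )  # (batch, bank_size)
    idx = sims.argmax(dim=1)
    return bank_ambient_feats[idx]


if __name__ == "__main__":
    denoiser = AmbientConditionedDenoiser()
    noisy_latent = torch.randn(2, 64)
    video_feat = torch.randn(2, 128)
    # A tiny illustrative "retrieval bank" of (video feature, ambient embedding) pairs.
    bank_video, bank_ambient = torch.randn(10, 128), torch.randn(10, 32)
    ambient_feat = retrieve_ambient(video_feat, bank_video, bank_ambient)
    print(denoiser(noisy_latent, video_feat, ambient_feat).shape)  # torch.Size([2, 64])
```

Conditioning on an explicit ambient embedding, rather than leaving background sound implicit, is what would allow the ambient component to be swapped or attenuated at test time, in the spirit of the controllable ambient generation the abstract describes.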

Authors (7)
  1. Changan Chen (31 papers)
  2. Puyuan Peng (21 papers)
  3. Ami Baid (1 paper)
  4. Zihui Xue (23 papers)
  5. Wei-Ning Hsu (76 papers)
  6. Kristen Grauman (136 papers)
  7. David Harwath (55 papers)
Citations (1)
