
Sound-Guided Semantic Video Generation (2204.09273v4)

Published 20 Apr 2022 in cs.CV and cs.AI

Abstract: The recent success of StyleGAN demonstrates that the pre-trained StyleGAN latent space is useful for realistic video generation. However, the generated motion in the video is usually not semantically meaningful due to the difficulty of determining the direction and magnitude in the StyleGAN latent space. In this paper, we propose a framework to generate realistic videos by leveraging a multimodal (sound-image-text) embedding space. As sound provides the temporal context of the scene, our framework learns to generate a video that is semantically consistent with sound. First, our sound inversion module maps the audio directly into the StyleGAN latent space. We then incorporate the CLIP-based multimodal embedding space to further provide the audio-visual relationships. Finally, the proposed frame generator learns to find a trajectory in the latent space that is coherent with the corresponding sound and generates a video in a hierarchical manner. We provide a new high-resolution landscape video dataset (audio-visual pairs) for the sound-guided video generation task. The experiments show that our model outperforms the state-of-the-art methods in terms of video quality. We further show several applications, including image and video editing, to verify the effectiveness of our method.
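As a rough illustration of the sound-inversion idea described in the abstract (this is not the authors' implementation; the function name, pooling strategy, dimensions, and weights below are hypothetical), the module can be thought of as an encoder that maps an audio feature such as a mel-spectrogram to a vector in the StyleGAN latent space:

```python
import numpy as np

STYLE_DIM = 512  # assumed dimensionality of the StyleGAN w space


def sound_inversion(mel_spec: np.ndarray, w_proj: np.ndarray) -> np.ndarray:
    """Map a mel-spectrogram (time x freq) to a StyleGAN latent code.

    Hypothetical sketch: average-pool over time, then apply a learned
    linear projection into the 512-d w space. The paper's module is a
    trained neural encoder; this only shows the input/output contract.
    """
    pooled = mel_spec.mean(axis=0)            # (freq,) time-averaged features
    w = pooled @ w_proj                       # (STYLE_DIM,) latent code
    return w / (np.linalg.norm(w) + 1e-8)     # normalize for stability


# Example: 100 time frames x 80 mel bins, random projection weights
rng = np.random.default_rng(0)
mel = rng.standard_normal((100, 80))
proj = rng.standard_normal((80, STYLE_DIM))
latent = sound_inversion(mel, proj)
print(latent.shape)  # (512,)
```

A latent code of this form could then be fed to a pre-trained StyleGAN generator, with the CLIP-based embedding space supervising how well the resulting frames match the audio.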

Authors (10)
  1. Seung Hyun Lee (10 papers)
  2. Gyeongrok Oh (7 papers)
  3. Wonmin Byeon (27 papers)
  4. Chanyoung Kim (14 papers)
  5. Won Jeong Ryoo (1 paper)
  6. Sang Ho Yoon (10 papers)
  7. Hyunjun Cho (6 papers)
  8. Jihyun Bae (2 papers)
  9. Jinkyu Kim (51 papers)
  10. Sangpil Kim (35 papers)
Citations (18)
