Semantically consistent Video-to-Audio Generation using Multimodal Language Large Model (2404.16305v2)
Abstract: Existing works have made strides in video generation, but the lack of sound effects (SFX) and background music (BGM) hinders a complete and immersive viewer experience. We introduce a novel semantically consistent video-to-audio generation framework, namely SVA, which automatically generates audio semantically consistent with the given video content. The framework harnesses the power of a multimodal LLM (MLLM) to understand video semantics from a key frame and generate creative audio schemes, which are then used as prompts for text-to-audio models, resulting in video-to-audio generation with natural language as an interface. We show the satisfactory performance of SVA through a case study and discuss its limitations along with future research directions. The project page is available at https://huiz-a.github.io/audio4video.github.io/.
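The pipeline described in the abstract (pick a key frame, have an MLLM propose an audio scheme, feed that scheme to a text-to-audio model as a natural-language prompt) can be sketched as follows. This is a minimal illustration, not the authors' implementation: the MLLM call is stubbed with a simple caption-to-prompt template, and all function names here are hypothetical.

```python
# Sketch of an SVA-style video-to-audio pipeline:
# (1) select a key frame, (2) ask an MLLM for a creative audio scheme,
# (3) use the scheme as SFX/BGM prompts for a text-to-audio model.
# The MLLM step is stubbed; names are illustrative, not from the paper.

def key_frame_index(n_frames: int) -> int:
    """Pick the middle frame as the key frame (a simple heuristic)."""
    if n_frames <= 0:
        raise ValueError("video has no frames")
    return n_frames // 2

def propose_audio_scheme(frame_caption: str) -> dict:
    """Stand-in for the MLLM: map a key-frame caption to SFX/BGM prompts.

    In the paper this role is played by a multimodal LLM prompted with
    the key frame image; here we just template the caption text.
    """
    return {
        "sfx": f"sound effects matching: {frame_caption}",
        "bgm": f"background music fitting the mood of: {frame_caption}",
    }

def build_t2a_prompts(n_frames: int, captions: list[str]) -> dict:
    """End to end: key frame -> audio scheme -> text-to-audio prompts."""
    idx = key_frame_index(n_frames)
    return propose_audio_scheme(captions[idx])

# Example: the resulting strings would be fed to a text-to-audio model
# (e.g. an AudioGen-style model for SFX, a MusicGen-style model for BGM).
prompts = build_t2a_prompts(3, ["intro", "waves crashing on a beach", "outro"])
print(prompts["sfx"])
```

In practice the key-frame captioning and scheme generation would be a single multimodal prompt to the MLLM, and natural language serves as the interface between the video-understanding and audio-generation stages.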
- Video generation models as world simulators. 2024.
- Audiogen: Textually guided audio generation. arXiv preprint arXiv:2209.15352, 2022.
- Simple and controllable music generation. arXiv preprint arXiv:2306.05284, 2023.
- Pika. Homepage, 2024. https://pika.art/, Last accessed on 2024-04-25.
- Multimodal large language models: A survey, 2023.
- Gemini Team Google. Gemini: A family of highly capable multimodal models, 2023.
- Digital signal processing (3rd ed.): principles, algorithms, and applications. Prentice-Hall, Inc., USA, 1996.
- Diff-foley: Synchronized video-to-audio synthesis with latent diffusion models, 2023.
- Seeing and hearing: Open-domain visual-audio generation with diffusion latent aligners, 2024.
- Imagebind: One embedding space to bind them all, 2023.
- Conditional generation of audio from video via foley analogies, 2023.
- I hear your true colors: Image guided audio generation, 2023.
- Learning transferable visual models from natural language supervision, 2021.
- Syncfusion: Multimodal onset-synchronized video-to-audio foley synthesis, 2023.
- The benefit of temporally-strong labels in audio event classification, 2021.
- Vggsound: A large-scale audio-visual dataset, 2020.
- Taming visually guided sound generation, 2021.
- Improved techniques for training GANs. arXiv preprint arXiv:1606.03498, 2016.
- GANs trained by a two time-scale update rule converge to a local Nash equilibrium. Advances in Neural Information Processing Systems, 2017.
- Foleygen: Visually-guided audio generation, 2023.
- Gehui Chen
- Guan'an Wang
- Xiaowen Huang
- Jitao Sang