Speech Driven Video Editing via an Audio-Conditioned Diffusion Model (2301.04474v3)
Abstract: Taking inspiration from recent developments in visual generative tasks using diffusion models, we propose a method for end-to-end speech-driven video editing using a denoising diffusion model. Given a video of a talking person and a separate speech recording, the lip and jaw motions are re-synchronized without relying on intermediate structural representations such as facial landmarks or a 3D face model. We show this is possible by conditioning a denoising diffusion model on audio mel spectral features to generate synchronized facial motion. Proof-of-concept results are demonstrated on both single-speaker and multi-speaker video editing, providing a baseline model on the CREMA-D audiovisual dataset. To the best of our knowledge, this is the first work to demonstrate and validate the feasibility of applying end-to-end denoising diffusion models to the task of audio-driven video editing.
- Dan Bigioi
- Shubhajit Basak
- Michał Stypułkowski
- Maciej Zięba
- Hugh Jordan
- Rachel McDonnell
- Peter Corcoran
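The core idea in the abstract, conditioning a denoising diffusion model on mel-based audio features and training with a noise-prediction objective, can be illustrated with a minimal sketch. This is not the authors' implementation: the `AudioConditionedDenoiser` module, the MLP architecture, the per-clip pooling of mel features, the 128x128 frame size, and the linear noise schedule below are all illustrative assumptions.

```python
# Minimal sketch (assumptions throughout): one DDPM-style training step for a
# denoiser conditioned on mel-spectrogram audio features. Architecture,
# dimensions, and schedule are placeholders, not the paper's model.
import torch
import torch.nn as nn
import torchaudio


class AudioConditionedDenoiser(nn.Module):
    """Toy epsilon-predictor: (noisy frame, timestep, audio features) -> noise estimate."""

    def __init__(self, frame_dim=3 * 128 * 128, audio_dim=80, hidden=512, timesteps=1000):
        super().__init__()
        self.audio_proj = nn.Linear(audio_dim, hidden)   # embed pooled mel features
        self.time_embed = nn.Embedding(timesteps, hidden)  # embed diffusion timestep
        self.net = nn.Sequential(
            nn.Linear(frame_dim + 2 * hidden, hidden),
            nn.SiLU(),
            nn.Linear(hidden, frame_dim),
        )

    def forward(self, noisy_frames, t, audio_feats):
        a = self.audio_proj(audio_feats)                 # (B, hidden)
        te = self.time_embed(t)                          # (B, hidden)
        x = torch.cat([noisy_frames.flatten(1), a, te], dim=1)
        return self.net(x).view_as(noisy_frames)


# --- DDPM forward process and one conditional training step ---
T = 1000
betas = torch.linspace(1e-4, 0.02, T)                    # assumed linear noise schedule
alphas_cumprod = torch.cumprod(1.0 - betas, dim=0)

mel = torchaudio.transforms.MelSpectrogram(sample_rate=16000, n_mels=80)

model = AudioConditionedDenoiser(timesteps=T)
opt = torch.optim.Adam(model.parameters(), lr=1e-4)

# Dummy batch: ground-truth frames and the matching speech waveform (illustrative shapes).
frames = torch.randn(4, 3, 128, 128)                     # (B, C, H, W)
waveform = torch.randn(4, 16000)                         # 1 s of 16 kHz audio per clip

audio_feats = mel(waveform).mean(dim=-1)                 # pool mel frames -> (B, 80)
t = torch.randint(0, T, (4,))
noise = torch.randn_like(frames)
a_bar = alphas_cumprod[t].view(-1, 1, 1, 1)
noisy = a_bar.sqrt() * frames + (1 - a_bar).sqrt() * noise   # q(x_t | x_0)

opt.zero_grad()
pred = model(noisy, t, audio_feats)
loss = nn.functional.mse_loss(pred, noise)               # standard epsilon-prediction loss
loss.backward()
opt.step()
```

At inference, the same audio conditioning would be applied at every reverse-diffusion step, so the edited frames are progressively denoised toward lip and jaw motion consistent with the driving speech; the sampling loop and the actual network design are described in the paper itself.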