Speech Driven Video Editing via an Audio-Conditioned Diffusion Model (2301.04474v3)

Published 10 Jan 2023 in cs.CV, cs.LG, cs.SD, and eess.AS

Abstract: Taking inspiration from recent developments in visual generative tasks using diffusion models, we propose a method for end-to-end speech-driven video editing using a denoising diffusion model. Given a video of a talking person and a separate speech recording, the lip and jaw motions are re-synchronized without relying on intermediate structural representations such as facial landmarks or a 3D face model. We show this is possible by conditioning a denoising diffusion model on audio mel spectral features to generate synchronized facial motion. Proof-of-concept results are demonstrated on both single-speaker and multi-speaker video editing, providing a baseline model on the CREMA-D audiovisual dataset. To the best of our knowledge, this is the first work to demonstrate and validate the feasibility of applying end-to-end denoising diffusion models to the task of audio-driven video editing.
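The core idea in the abstract, conditioning a denoising diffusion model on mel-spectrogram features so the denoiser learns facial motion synchronized with the audio, can be illustrated with a minimal DDPM-style training step. The sketch below is not the authors' architecture: the tiny convolutional denoiser, the crude timestep embedding, the placeholder noise schedule, and the single-vector audio conditioning are all illustrative assumptions (a real system would use a UNet denoiser and a temporal window of mel frames).

```python
import torch
import torch.nn as nn

class AudioConditionedDenoiser(nn.Module):
    """Toy denoiser: predicts the noise on a frame, conditioned on mel features.
    Stands in for a full UNet; names and shapes are illustrative only."""
    def __init__(self, frame_channels=3, mel_bins=80, hidden=64):
        super().__init__()
        # Project mel-spectrogram features to a conditioning vector.
        self.audio_proj = nn.Sequential(
            nn.Linear(mel_bins, hidden), nn.SiLU(), nn.Linear(hidden, hidden)
        )
        self.encoder = nn.Conv2d(frame_channels, hidden, 3, padding=1)
        self.decoder = nn.Conv2d(hidden, frame_channels, 3, padding=1)

    def forward(self, noisy_frames, t, mel):
        # noisy_frames: (B, C, H, W); t: (B,); mel: (B, mel_bins)
        h = self.encoder(noisy_frames)
        cond = self.audio_proj(mel)[:, :, None, None]    # broadcast over H, W
        t_emb = t.float()[:, None, None, None] / 1000.0  # crude timestep embedding
        return self.decoder(h + cond + t_emb)

def ddpm_training_step(model, frames, mel, alphas_cumprod):
    """One DDPM training step: noise a clean frame at a random timestep,
    then train the audio-conditioned model to predict that noise."""
    B = frames.size(0)
    t = torch.randint(0, len(alphas_cumprod), (B,))
    a = alphas_cumprod[t][:, None, None, None]
    noise = torch.randn_like(frames)
    noisy = a.sqrt() * frames + (1 - a).sqrt() * noise   # forward process q(x_t | x_0)
    pred = model(noisy, t, mel)
    return nn.functional.mse_loss(pred, noise)

# Usage with dummy shapes (64x64 frames, 80-bin mel features):
model = AudioConditionedDenoiser()
alphas_cumprod = torch.linspace(0.9999, 0.0001, 1000)  # placeholder schedule, not the paper's
frames = torch.randn(4, 3, 64, 64)
mel = torch.randn(4, 80)
loss = ddpm_training_step(model, frames, mel, alphas_cumprod)
loss.backward()
```

In the paper's setting the denoised signal is video frames of the talking face rather than generic images; the sketch keeps only the conditioning pattern, in which audio features are injected into the denoiser at every diffusion step.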

Authors (7)
  1. Dan Bigioi (6 papers)
  2. Shubhajit Basak (5 papers)
  3. Michał Stypułkowski (12 papers)
  4. Maciej Zięba (38 papers)
  5. Hugh Jordan (1 paper)
  6. Rachel McDonnell (10 papers)
  7. Peter Corcoran (54 papers)
Citations (24)
