
Motion Consistency Model: Accelerating Video Diffusion with Disentangled Motion-Appearance Distillation (2406.06890v2)

Published 11 Jun 2024 in cs.CV

Abstract: Image diffusion distillation achieves high-fidelity generation with very few sampling steps. However, applying these techniques directly to video diffusion often results in unsatisfactory frame quality due to the limited visual quality in public video datasets. This affects the performance of both teacher and student video diffusion models. Our study aims to improve video diffusion distillation while improving frame appearance using abundant high-quality image data. We propose motion consistency model (MCM), a single-stage video diffusion distillation method that disentangles motion and appearance learning. Specifically, MCM includes a video consistency model that distills motion from the video teacher model, and an image discriminator that enhances frame appearance to match high-quality image data. This combination presents two challenges: (1) conflicting frame learning objectives, as video distillation learns from low-quality video frames while the image discriminator targets high-quality images; and (2) training-inference discrepancies due to the differing quality of video samples used during training and inference. To address these challenges, we introduce disentangled motion distillation and mixed trajectory distillation. The former applies the distillation objective solely to the motion representation, while the latter mitigates training-inference discrepancies by mixing distillation trajectories from both the low- and high-quality video domains. Extensive experiments show that our MCM achieves the state-of-the-art video diffusion distillation performance. Additionally, our method can enhance frame quality in video diffusion models, producing frames with high aesthetic scores or specific styles without corresponding video data.
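The abstract's core idea — apply the distillation objective only to a motion representation, while an image discriminator shapes per-frame appearance — can be illustrated with a minimal numpy sketch. Everything here is a hypothetical stand-in, not the paper's actual components: temporal frame differences substitute for the learned motion representation, a hinge GAN loss stands in for the image discriminator, and the weight `lam` is an assumed hyperparameter.

```python
import numpy as np

def motion_representation(frames):
    # Hypothetical stand-in: temporal frame differences. The paper's
    # disentanglement is learned, not this fixed operator.
    return frames[1:] - frames[:-1]

def disentangled_motion_loss(student_frames, teacher_frames):
    # Distillation applied only to motion, so per-frame appearance is
    # left free to follow the image discriminator instead.
    m_s = motion_representation(student_frames)
    m_t = motion_representation(teacher_frames)
    return float(np.mean((m_s - m_t) ** 2))

def frame_hinge_loss(real_scores, fake_scores):
    # Toy hinge GAN loss scoring individual frames against
    # high-quality image data (discriminator objective).
    return float(np.mean(np.maximum(0.0, 1.0 - real_scores)) +
                 np.mean(np.maximum(0.0, 1.0 + fake_scores)))

def mcm_objective(student_frames, teacher_frames,
                  real_scores, fake_scores, lam=0.1):
    # Combined signal: motion distillation + adversarial appearance
    # term; `lam` is an assumed weighting, not from the paper.
    return (disentangled_motion_loss(student_frames, teacher_frames)
            + lam * frame_hinge_loss(real_scores, fake_scores))

# A uniform appearance shift leaves the motion loss at (numerically)
# zero, illustrating why the two objectives no longer conflict.
teacher = np.random.default_rng(0).normal(size=(4, 8, 8))
student = teacher + 0.5  # same motion, different appearance
print(disentangled_motion_loss(student, teacher))  # ~0.0 (float rounding)
```

In this toy setting, a student that changes every frame's appearance identically incurs no motion-distillation penalty, which is the intuition behind resolving the "conflicting frame learning objectives" the abstract describes.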

Authors (9)
  1. Yuanhao Zhai (11 papers)
  2. Kevin Lin (98 papers)
  3. Zhengyuan Yang (86 papers)
  4. Linjie Li (89 papers)
  5. Jianfeng Wang (149 papers)
  6. Chung-Ching Lin (36 papers)
  7. David Doermann (54 papers)
  8. Junsong Yuan (92 papers)
  9. Lijuan Wang (133 papers)
Citations (5)