Video Interpolation with Diffusion Models (2404.01203v1)

Published 1 Apr 2024 in cs.CV

Abstract: We present VIDIM, a generative model for video interpolation, which creates short videos given a start and end frame. In order to achieve high fidelity and generate motions unseen in the input data, VIDIM uses cascaded diffusion models to first generate the target video at low resolution, and then generate the high-resolution video conditioned on the low-resolution generated video. We compare VIDIM to previous state-of-the-art methods on video interpolation, and demonstrate how such works fail in most settings where the underlying motion is complex, nonlinear, or ambiguous while VIDIM can easily handle such cases. We additionally demonstrate how classifier-free guidance on the start and end frame and conditioning the super-resolution model on the original high-resolution frames without additional parameters unlocks high-fidelity results. VIDIM is fast to sample from as it jointly denoises all the frames to be generated, requires less than a billion parameters per diffusion model to produce compelling results, and still enjoys scalability and improved quality at larger parameter counts.


Summary

  • The paper introduces VIDIM, a novel generative approach that leverages cascaded diffusion models for video interpolation.
  • It employs a two-step process with a base diffusion model for low-resolution generation followed by super-resolution refinement using temporal attention.
  • Empirical evaluations show VIDIM outperforms state-of-the-art methods on metrics such as Fréchet Video Distance (FVD), validating its effectiveness in complex motion scenarios.

Exploring the Frontier of Video Interpolation with VIDIM: A Generative Approach

Introduction

Video interpolation creates intermediate frames between a given start and end frame, typically to increase a video's frame rate or to produce slow-motion footage. Traditional methods rely largely on optical flow and near-linear motion estimates, which often break down under complex, non-linear, or ambiguous motion. The paper introduces Video Interpolation with Diffusion Models (VIDIM), a generative approach that leverages cascaded diffusion models to tackle these challenges head-on. VIDIM significantly outperforms existing state-of-the-art methods on complex and ambiguous motion, generating high-quality, plausible videos even in the toughest scenarios.

Methodology

Cascaded Diffusion Models for Video Generation

VIDIM's architecture employs a two-stage generative process. First, a base diffusion model generates the target video at low resolution, conditioned on the start and end frames. Then, a super-resolution model conditioned on this low-resolution video and on the original high-resolution frames synthesizes the final high-resolution video. This cascaded approach, inspired by cascaded diffusion models for image generation, lets VIDIM capture fine details while maintaining temporal consistency across frames.
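
To make the pipeline concrete, here is a minimal sketch of two-stage cascade sampling in PyTorch. The `base_model`, `sr_model`, and `sample_fn` interfaces are illustrative assumptions, not the paper's actual API; `sample_fn` stands in for any standard diffusion sampling loop.

```python
import torch
import torch.nn.functional as F

def interpolate_cascade(base_model, sr_model, sample_fn, frame0_hr, frame1_hr,
                        num_frames=7, low_res=64, high_res=256):
    """Two-stage cascaded sampling: low-res video first, then super-resolution.

    frame0_hr, frame1_hr: (B, C, H, W) start/end frames at high resolution.
    sample_fn(model, shape, cond): a generic diffusion sampling loop.
    All names here are illustrative; the paper's actual interfaces differ.
    """
    b, c, _, _ = frame0_hr.shape

    # Stage 1: the base model generates all intermediate frames jointly at
    # low resolution, conditioned on downsampled start/end frames.
    frame0_lr = F.interpolate(frame0_hr, size=low_res, mode="area")
    frame1_lr = F.interpolate(frame1_hr, size=low_res, mode="area")
    video_lr = sample_fn(base_model,
                         shape=(b, num_frames, c, low_res, low_res),
                         cond={"first": frame0_lr, "last": frame1_lr})

    # Stage 2: the super-resolution model is conditioned on the upsampled
    # low-res video AND the original high-resolution endpoint frames.
    video_up = F.interpolate(video_lr.flatten(0, 1), size=high_res,
                             mode="bilinear", align_corners=False)
    video_up = video_up.view(b, num_frames, c, high_res, high_res)
    video_hr = sample_fn(sr_model,
                         shape=(b, num_frames, c, high_res, high_res),
                         cond={"low_res": video_up,
                               "first": frame0_hr, "last": frame1_hr})
    return video_hr
```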

Architectural Innovations and Training Regimen

The paper introduces several key innovations in the model's architecture and training process. Notably, VIDIM adapts a UNet architecture to video by letting feature maps mix across frames through temporal attention blocks. It also incorporates a parameter-free frame-conditioning technique: the clean start and end frames are fed into the denoising network with their noise level set to zero, so information from these frames propagates through the network without any additional parameters. Finally, the models employ classifier-free guidance on the conditioning frames, which dramatically improves sample quality and is critical to realistic interpolation results.
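
The temporal attention idea can be illustrated with a short PyTorch module: spatial positions are folded into the batch so that self-attention runs purely over the frame axis. This is a generic sketch of the mechanism, not the paper's exact block; the head count and normalization choices are assumptions.

```python
import torch
import torch.nn as nn

class TemporalAttention(nn.Module):
    """Self-attention over the frame axis, applied independently per pixel.

    A minimal sketch of how a 2D UNet can be adapted for video: spatial
    layers stay frame-wise, and blocks like this let feature maps mix
    across time. Assumes channels is divisible by num_heads.
    """
    def __init__(self, channels, num_heads=8):
        super().__init__()
        self.norm = nn.LayerNorm(channels)
        self.attn = nn.MultiheadAttention(channels, num_heads, batch_first=True)

    def forward(self, x):
        # x: (batch, frames, channels, height, width)
        b, t, c, h, w = x.shape
        # Fold spatial positions into the batch so attention runs over frames.
        seq = x.permute(0, 3, 4, 1, 2).reshape(b * h * w, t, c)
        seq_norm = self.norm(seq)
        out, _ = self.attn(seq_norm, seq_norm, seq_norm, need_weights=False)
        seq = seq + out  # residual connection
        return seq.view(b, h, w, t, c).permute(0, 3, 4, 1, 2)
```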

During training, VIDIM models are optimized using a continuous-time objective based on the evidence lower bound (ELBO), with adjustments for video-specific dynamics. Training leverages large-scale video datasets, with procedures in place to filter out undesirable examples, such as those with rapid scene cuts, ensuring that the models learn from relevant data.
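
The sketch below illustrates the zero-noise frame-conditioning trick inside a single training step. It uses a simple epsilon-prediction loss and a cosine schedule purely for concreteness; the paper's actual continuous-time ELBO-based objective, parameterization, and noise schedule differ.

```python
import torch

def training_step(model, video, optimizer):
    """One denoising training step with zero-noise frame conditioning.

    video: (B, T, C, H, W) clip whose first and last frames serve as the
    conditioning frames. A simplified epsilon-prediction sketch, not the
    paper's exact objective.
    """
    b, t, c, h, w = video.shape

    # Sample a continuous time per example and map it to a signal/noise mix
    # via a cosine schedule: alpha = cos(pi/2 * u), sigma = sin(pi/2 * u).
    u = torch.rand(b, device=video.device)
    alpha = torch.cos(0.5 * torch.pi * u).view(b, 1, 1, 1, 1)
    sigma = torch.sin(0.5 * torch.pi * u).view(b, 1, 1, 1, 1)

    noise = torch.randn_like(video)
    noisy = alpha * video + sigma * noise

    # Frame-conditioning trick: the first and last frames are fed in clean,
    # with their per-frame noise level set to zero, so the network can read
    # them without any additional conditioning parameters.
    noisy[:, 0], noisy[:, -1] = video[:, 0], video[:, -1]
    noise_level = u.view(b, 1).repeat(1, t)
    noise_level[:, 0] = 0.0
    noise_level[:, -1] = 0.0

    pred = model(noisy, noise_level)  # predicts the added noise
    # The loss is only computed on the frames actually being generated.
    loss = ((pred - noise)[:, 1:-1] ** 2).mean()

    optimizer.zero_grad()
    loss.backward()
    optimizer.step()
    return loss.item()
```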

Empirical Evaluation

Benchmarking Against State-of-the-Art

VIDIM's performance was extensively evaluated against several state-of-the-art video interpolation methods on challenging datasets derived from DAVIS and UCF101. The evaluation covered both generative metrics, such as Fréchet Video Distance (FVD), and traditional reconstruction-based metrics. VIDIM consistently outperformed the baselines, especially in scenarios with large and ambiguous motion, validating its ability to generate plausible and temporally consistent videos.
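
At its core, FVD is the Fréchet (2-Wasserstein) distance between Gaussians fit to embeddings of real and generated videos, computed from features of a pretrained video network such as I3D. A minimal NumPy/SciPy implementation of that distance, assuming the feature extraction has already been done:

```python
import numpy as np
from scipy import linalg

def frechet_distance(feats_real, feats_gen):
    """Fréchet distance between Gaussians fit to two feature sets.

    feats_*: (N, D) arrays of video embeddings (FVD uses features from a
    pretrained I3D network; any fixed feature extractor is assumed here).
    FD = ||mu1 - mu2||^2 + Tr(C1 + C2 - 2 (C1 C2)^{1/2})
    """
    mu1, mu2 = feats_real.mean(axis=0), feats_gen.mean(axis=0)
    c1 = np.cov(feats_real, rowvar=False)
    c2 = np.cov(feats_gen, rowvar=False)

    covmean = linalg.sqrtm(c1 @ c2)
    if np.iscomplexobj(covmean):   # numerical noise can yield tiny
        covmean = covmean.real     # imaginary components
    return float(np.sum((mu1 - mu2) ** 2)
                 + np.trace(c1 + c2 - 2.0 * covmean))
```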

User Study

A user study involving video quadruplets generated from the same input frame pairs highlighted VIDIM's advantages. Participants overwhelmingly preferred VIDIM-generated videos over those produced by baseline models, underlining its effectiveness in producing high-quality, realistic videos even under difficult conditions.

Ablations and Further Insights

The authors carried out ablations to dissect the contributions of individual components, particularly highlighting the importance of explicit frame conditioning and classifier-free guidance in achieving optimal results. Scalability tests further demonstrated that VIDIM improves with larger models, though balancing the parameter count between the base and super-resolution models was crucial for maximizing quality.
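
Classifier-free guidance itself is straightforward at sampling time: the model is queried both with and without the conditioning frames, and the two predictions are extrapolated by a guidance weight. A hedged sketch, with a hypothetical `model(..., cond=...)` interface:

```python
import torch

def guided_prediction(model, noisy_video, noise_level, cond, guidance_weight=2.0):
    """Classifier-free guidance on the conditioning frames (illustrative).

    The model is trained with the start/end frames randomly dropped, so it
    can be queried with and without them; at sampling time the two
    predictions are extrapolated by the guidance weight w:
        eps = eps_uncond + w * (eps_cond - eps_uncond)
    The `cond` packaging is hypothetical, not the paper's API.
    """
    eps_cond = model(noisy_video, noise_level, cond=cond)
    eps_uncond = model(noisy_video, noise_level, cond=None)
    return eps_uncond + guidance_weight * (eps_cond - eps_uncond)
```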

Conclusion and Future Directions

VIDIM represents a significant advancement in video interpolation, notably for scenarios that have historically posed challenges for generative models. By leveraging cascaded diffusion models and novel architectural tweaks, VIDIM sets new standards for video interpolation quality. Future work might explore its application to other video generation tasks, extend its capabilities to arbitrary aspect ratios, or further refine super-resolution models to enhance quality. The findings promise exciting developments in video processing and generative modeling, paving the way for more realistic and complex video generation tasks.
