
VideoPhy: Evaluating Physical Commonsense for Video Generation (2406.03520v2)

Published 5 Jun 2024 in cs.CV, cs.AI, and cs.LG

Abstract: Recent advances in internet-scale video data pretraining have led to the development of text-to-video generative models that can create high-quality videos across a broad range of visual concepts, synthesize realistic motions, and render complex objects. Hence, these generative models have the potential to become general-purpose simulators of the physical world. However, it is unclear how far we are from this goal with the existing text-to-video generative models. To this end, we present VideoPhy, a benchmark designed to assess whether generated videos follow physical commonsense for real-world activities (e.g., marbles will roll down when placed on a slanted surface). Specifically, we curate diverse prompts that involve interactions between various material types in the physical world (e.g., solid-solid, solid-fluid, fluid-fluid). We then generate videos conditioned on these captions from diverse state-of-the-art text-to-video generative models, including open models (e.g., CogVideoX) and closed models (e.g., Lumiere, Dream Machine). Our human evaluation reveals that the existing models severely lack the ability to generate videos adhering to the given text prompts, while also lacking physical commonsense. Specifically, the best-performing model, CogVideoX-5B, generates videos that adhere to both the caption and physical laws for only 39.6% of the instances. VideoPhy thus highlights that video generative models are far from accurately simulating the physical world. Finally, we propose an auto-evaluator, VideoCon-Physics, to reliably assess the performance of newly released models.
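
The headline 39.6% figure is a joint metric: the fraction of instances where a video satisfies both criteria at once (adheres to the caption and obeys physical laws). Below is a minimal sketch of how such a joint score could be computed from binary per-video annotations; the label names and data layout are hypothetical illustrations, not taken from the VideoPhy codebase.

```python
# Hypothetical per-video annotations: "sa" = semantic adherence to the
# caption, "pc" = physical commonsense, both binary human judgments.
# The joint metric counts only videos that satisfy both criteria.

def joint_score(labels: list[dict]) -> float:
    """Fraction of instances with sa == 1 and pc == 1."""
    if not labels:
        return 0.0
    hits = sum(1 for ann in labels if ann["sa"] == 1 and ann["pc"] == 1)
    return hits / len(labels)

# Example: four annotated videos; only the first satisfies both criteria.
annotations = [
    {"sa": 1, "pc": 1},
    {"sa": 1, "pc": 0},  # follows the caption but violates physics
    {"sa": 0, "pc": 1},  # physically plausible but off-prompt
    {"sa": 0, "pc": 0},
]
print(f"SA+PC: {joint_score(annotations):.1%}")  # -> 25.0%
```

Because the metric is a conjunction, it is strictly upper-bounded by each individual axis, which is why even the best model's joint score stays below 40% despite stronger per-axis results.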

Authors (10)
  1. Hritik Bansal (38 papers)
  2. Zongyu Lin (15 papers)
  3. Tianyi Xie (13 papers)
  4. Zeshun Zong (11 papers)
  5. Michal Yarom (12 papers)
  6. Yonatan Bitton (36 papers)
  7. Chenfanfu Jiang (59 papers)
  8. Yizhou Sun (149 papers)
  9. Kai-Wei Chang (292 papers)
  10. Aditya Grover (82 papers)
Citations (6)
