
Flow-Grounded Spatial-Temporal Video Prediction from Still Images (1807.09755v2)

Published 25 Jul 2018 in cs.CV

Abstract: Existing video prediction methods mainly rely on observing multiple historical frames or focus on predicting only the next frame. In this work, we study the problem of generating multiple consecutive future frames from a single still image. We formulate the multi-frame prediction task as a multiple time step flow (multi-flow) prediction phase followed by a flow-to-frame synthesis phase. The multi-flow prediction is modeled in a variational probabilistic manner, with spatial-temporal relationships learned through 3D convolutions. The flow-to-frame synthesis is modeled as a generative process so that the predicted results lie close to the manifold of real video sequences. This two-phase design prevents the model from directly operating on the high-dimensional pixel space of the frame sequence and is demonstrated to be more effective at producing high-quality and diverse predictions. Extensive experimental results on videos with different types of motion show that the proposed algorithm performs favorably against existing methods in terms of quality, diversity, and human perceptual evaluation.
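To make the two-phase idea concrete, here is a minimal NumPy sketch of the second phase: given a still frame and a sequence of predicted dense flow fields, each future frame is synthesized by warping the previous one. Note this is a simplified stand-in, assuming plain bilinear backward warping; the paper's actual synthesis phase is a learned generative network, and its flows come from a variational 3D-convolutional predictor rather than being supplied directly. All function names here are illustrative, not from the paper.

```python
import numpy as np

def warp_frame(frame, flow):
    """Backward-warp a grayscale frame with a dense flow field via
    bilinear sampling. frame: (H, W); flow: (H, W, 2) holding (dx, dy)
    per output pixel, i.e. output[y, x] = frame[y + dy, x + dx].
    Simplified stand-in for the paper's learned flow-to-frame synthesis.
    """
    H, W = frame.shape
    ys, xs = np.meshgrid(np.arange(H), np.arange(W), indexing="ij")
    # Source coordinates each output pixel samples from, clamped to bounds.
    sx = np.clip(xs + flow[..., 0], 0, W - 1)
    sy = np.clip(ys + flow[..., 1], 0, H - 1)
    x0 = np.floor(sx).astype(int)
    y0 = np.floor(sy).astype(int)
    x1 = np.clip(x0 + 1, 0, W - 1)
    y1 = np.clip(y0 + 1, 0, H - 1)
    wx, wy = sx - x0, sy - y0
    # Bilinear interpolation of the four neighbouring pixels.
    top = frame[y0, x0] * (1 - wx) + frame[y0, x1] * wx
    bot = frame[y1, x0] * (1 - wx) + frame[y1, x1] * wx
    return top * (1 - wy) + bot * wy

def rollout_frames(frame, flows):
    """Synthesize multiple future frames from one still image, given a
    multi-step flow prediction (here the flows are simply provided)."""
    frames, cur = [], frame
    for flow in flows:
        cur = warp_frame(cur, flow)
        frames.append(cur)
    return frames
```

For example, a constant flow of (dx=1, dy=0) shifts image content one pixel to the left at each step, so a single bright pixel drifts leftward over the rolled-out sequence.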

Authors (6)
  1. Yijun Li (56 papers)
  2. Chen Fang (157 papers)
  3. Jimei Yang (58 papers)
  4. Zhaowen Wang (55 papers)
  5. Xin Lu (165 papers)
  6. Ming-Hsuan Yang (377 papers)
Citations (135)
