
Transformation-Based Models of Video Sequences (1701.08435v3)

Published 29 Jan 2017 in cs.LG and cs.CV

Abstract: In this work we propose a simple unsupervised approach for next frame prediction in video. Instead of directly predicting the pixels in a frame given past frames, we predict the transformations needed for generating the next frame in a sequence, given the transformations of the past frames. This leads to sharper results, while using a smaller prediction model. In order to enable a fair comparison between different video frame prediction models, we also propose a new evaluation protocol. We use generated frames as input to a classifier trained with ground truth sequences. This criterion guarantees that models scoring high are those producing sequences which preserve discriminative features, as opposed to merely penalizing any deviation, plausible or not, from the ground truth. Our proposed approach compares favourably against more sophisticated ones on the UCF-101 data set, while also being more efficient in terms of the number of parameters and computational cost.
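The core idea, predicting the frame-to-frame transformations rather than the pixels themselves, can be illustrated with a short sketch. The code below is a minimal, hypothetical PyTorch version that assumes a single global affine transform per frame pair; the paper operates on local patch-wise transforms with a learned predictor, so every class and function name here is illustrative rather than the authors' implementation.

```python
# Minimal sketch of transformation-based next-frame prediction, assuming a
# single global affine transform per frame pair (a simplification made here
# for illustration; the paper uses patch-wise transforms).
import torch
import torch.nn as nn
import torch.nn.functional as F

class TransformPredictor(nn.Module):
    """Small MLP mapping the affine parameters (2x3 = 6 numbers) of the past
    k frame-to-frame transforms to the parameters of the next transform."""
    def __init__(self, k_past: int = 4, hidden: int = 128):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(6 * k_past, hidden),
            nn.ReLU(),
            nn.Linear(hidden, 6),
        )

    def forward(self, past_thetas: torch.Tensor) -> torch.Tensor:
        # past_thetas: (batch, k_past, 2, 3) -> predicted theta: (batch, 2, 3)
        b = past_thetas.shape[0]
        return self.net(past_thetas.reshape(b, -1)).reshape(b, 2, 3)

def warp(frame: torch.Tensor, theta: torch.Tensor) -> torch.Tensor:
    """Apply a predicted affine transform to the last observed frame to
    synthesize the next frame. frame: (batch, C, H, W), theta: (batch, 2, 3)."""
    grid = F.affine_grid(theta, frame.shape, align_corners=False)
    return F.grid_sample(frame, grid, align_corners=False)

# Usage: predict the next transform from past transforms, then warp the last frame.
model = TransformPredictor(k_past=4)
past_thetas = torch.randn(8, 4, 2, 3) * 0.01 + torch.eye(2, 3)  # near-identity transforms
last_frame = torch.rand(8, 3, 64, 64)
next_frame = warp(last_frame, model(past_thetas))
```

The proposed evaluation protocol can be sketched in the same spirit: a classifier trained on ground-truth clips scores clips whose tail frames are replaced by generated ones, so a prediction model that preserves discriminative content keeps the classifier's accuracy close to its accuracy on fully ground-truth clips. The classifier interface below is an assumption for illustration, not the paper's code.

```python
# Hypothetical sketch of the classifier-based evaluation protocol; `clf` is
# assumed to be an action classifier already trained on ground-truth clips.
import torch

@torch.no_grad()
def classifier_score(clf, contexts, generated_tails, labels):
    """contexts:        (N, T_ctx, C, H, W) ground-truth conditioning frames
       generated_tails: (N, T_gen, C, H, W) frames produced by the prediction model
       labels:          (N,)                action labels of each clip
    Returns the classifier's accuracy on clips whose tail is generated."""
    clips = torch.cat([contexts, generated_tails], dim=1)  # (N, T, C, H, W)
    logits = clf(clips)
    return (logits.argmax(dim=1) == labels).float().mean().item()
```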

Authors (6)
  1. Joost van Amersfoort (17 papers)
  2. Anitha Kannan (29 papers)
  3. Marc'Aurelio Ranzato (53 papers)
  4. Arthur Szlam (86 papers)
  5. Du Tran (28 papers)
  6. Soumith Chintala (31 papers)
Citations (73)
