VAE^2: Preventing Posterior Collapse of Variational Video Predictions in the Wild (2101.12050v1)

Published 28 Jan 2021 in cs.CV

Abstract: Predicting future frames of video sequences is challenging due to the complex and stochastic nature of the problem. Video prediction methods based on variational auto-encoders (VAEs) have achieved great success, but they require the training data to contain multiple possible futures for an observed video sequence. This is hard to fulfill when videos are captured in the wild, where any given observation has only one determinate future. As a result, training a vanilla VAE model with these videos inevitably causes posterior collapse. To alleviate this problem, we propose a novel VAE structure, dubbed VAE-in-VAE or VAE$^2$. The key idea is to explicitly introduce stochasticity into the VAE. We treat part of the observed video sequence as a random transition state that bridges its past and future, and maximize the likelihood of a Markov chain over the video sequence under all possible transition states. A tractable lower bound is proposed for this intractable objective function, and an end-to-end optimization algorithm is designed accordingly. VAE$^2$ can mitigate the posterior collapse problem to a large extent, as it breaks the direct dependence between future and observation and does not directly regress the determinate future provided by the training data. We carry out experiments on a large-scale dataset called Cityscapes, which contains videos collected in a number of cities. Results show that VAE$^2$ is capable of predicting diverse futures and is more resistant to posterior collapse than other state-of-the-art VAE-based approaches. We believe that VAE$^2$ is also applicable to other stochastic sequence prediction problems whose training data lack stochasticity.
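
For orientation (notation introduced here, not taken from the paper), a standard conditional VAE for video prediction with past frames $x_{1:t}$, future frames $x_{t+1:T}$, and a latent variable $z$ maximizes the evidence lower bound

$$\mathcal{L}(\theta,\phi)=\mathbb{E}_{q_\phi(z\mid x_{1:T})}\!\left[\log p_\theta(x_{t+1:T}\mid x_{1:t},z)\right]-D_{\mathrm{KL}}\!\left(q_\phi(z\mid x_{1:T})\,\|\,p(z)\right).$$

Posterior collapse is the regime in which the KL term is driven to zero, so $q_\phi(z\mid x_{1:T})\approx p(z)$ and the decoder effectively ignores $z$. When each observed past has exactly one recorded future, as in videos captured in the wild, the reconstruction term can be fit without information flowing through $z$, which is the failure mode the VAE$^2$ construction described in the abstract is designed to avoid.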
