q-VAE for Disentangled Representation Learning and Latent Dynamical Systems

Published 4 Mar 2020 in cs.LG, cs.RO, and stat.ML (arXiv:2003.01852v3)

Abstract: A variational autoencoder (VAE) derived from Tsallis statistics, called q-VAE, is proposed. In the proposed method, a standard VAE is employed to statistically extract the latent space hidden in sampled data; this latent space helps make robots controllable at feasible computational time and cost. To improve the usefulness of the latent space, this paper focuses on disentangled representation learning, for which $\beta$-VAE is the baseline. Starting from a Tsallis-statistics perspective, a new lower bound for the proposed q-VAE is derived to maximize the likelihood of the sampled data; it can be regarded as an adaptive $\beta$-VAE with a deformed Kullback-Leibler divergence. To verify the benefits of the proposed q-VAE, a benchmark task of extracting the latent space from the MNIST dataset was performed. The results demonstrate that the proposed q-VAE improved disentangled representation while maintaining reconstruction accuracy. In addition, it relaxes the independence condition between data points, which is demonstrated by learning the latent dynamics of nonlinear dynamical systems. By combining disentangled representation, the proposed q-VAE achieves stable and accurate long-term state prediction from the initial state and the action sequence. The dataset for hexapod walking is available on IEEE Dataport, doi: https://dx.doi.org/10.21227/99af-jw71.

Citations (16)


Authors (1)
