A Commentary on the Unsupervised Learning of Disentangled Representations (2007.14184v1)
Published 28 Jul 2020 in cs.LG, cs.AI, and stat.ML
Abstract: The goal of the unsupervised learning of disentangled representations is to separate the independent explanatory factors of variation in the data without access to supervision. In this paper, we summarize the results of Locatello et al., 2019, and focus on their implications for practitioners. We discuss the theoretical result showing that the unsupervised learning of disentangled representations is fundamentally impossible without inductive biases and the practical challenges it entails. Finally, we comment on our experimental findings, highlighting the limitations of state-of-the-art approaches and directions for future research.
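The impossibility result referred to above is Theorem 1 of Locatello et al. (2019); paraphrased in our own wording (not a verbatim quote), it states: for $d > 1$, if $z \sim P$ admits a factorized density $p(z) = \prod_{i=1}^{d} p(z_i)$, then there exists an infinite family of bijective functions $f : \mathrm{supp}(z) \to \mathrm{supp}(z)$ such that $\frac{\partial f_i(u)}{\partial u_j} \neq 0$ almost everywhere for all $i$ and $j$ (i.e., $z$ and $f(z)$ are completely entangled), while $P(z \le u) = P(f(z) \le u)$ for all $u \in \mathrm{supp}(z)$ (i.e., both induce exactly the same marginal distribution). Consequently, after observing the data alone, the two equivalent generative models cannot be distinguished, so without inductive biases on models and data sets no purely unsupervised method can reliably recover the disentangled factors.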
- Francesco Locatello
- Stefan Bauer
- Gunnar Rätsch
- Sylvain Gelly
- Bernhard Schölkopf
- Olivier Bachem
- Mario Lucic