Unsupervised Disentanglement without Autoencoding: Pitfalls and Future Directions (2108.06613v1)

Published 14 Aug 2021 in cs.CV and cs.LG

Abstract: Disentangled visual representations have largely been studied with generative models such as Variational AutoEncoders (VAEs). While prior work has focused on generative methods for disentangled representation learning, these approaches do not scale to large datasets due to current limitations of generative models. Instead, we explore regularization methods with contrastive learning, which could result in disentangled representations that are powerful enough for large scale datasets and downstream applications. However, we find that unsupervised disentanglement is difficult to achieve due to optimization and initialization sensitivity, with trade-offs in task performance. We evaluate disentanglement with downstream tasks, analyze the benefits and disadvantages of each regularization used, and discuss future directions.
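The abstract describes combining contrastive learning with regularization to encourage disentangled representations without a generative model. As an illustrative sketch only (the paper's exact losses and weights are not given here), the following assumes an InfoNCE-style contrastive term plus an off-diagonal covariance penalty, a common proxy for pushing embedding dimensions toward independence:

```python
import numpy as np

def info_nce_loss(z1, z2, temperature=0.1):
    """Contrastive (InfoNCE-style) loss between two batches of
    embeddings z1, z2 with shape (batch, dim); positives lie on
    the diagonal of the pairwise similarity matrix."""
    z1 = z1 / np.linalg.norm(z1, axis=1, keepdims=True)
    z2 = z2 / np.linalg.norm(z2, axis=1, keepdims=True)
    logits = z1 @ z2.T / temperature
    # log-softmax over each row, then pick the diagonal (positive pair)
    log_probs = logits - np.log(np.exp(logits).sum(axis=1, keepdims=True))
    idx = np.arange(len(z1))
    return -log_probs[idx, idx].mean()

def decorrelation_penalty(z):
    """Off-diagonal covariance penalty: a simple disentanglement
    regularizer that discourages correlated embedding dimensions."""
    zc = z - z.mean(axis=0)
    cov = zc.T @ zc / (len(z) - 1)
    off_diag = cov - np.diag(np.diag(cov))
    return (off_diag ** 2).sum() / z.shape[1]

# Hypothetical combined objective: contrastive term + weighted regularizer.
rng = np.random.default_rng(0)
z1 = rng.normal(size=(8, 4))
z2 = rng.normal(size=(8, 4))
total_loss = info_nce_loss(z1, z2) + 0.1 * decorrelation_penalty(z1)
```

The regularizer weight (0.1 here) is a hypothetical choice; the paper's finding is precisely that such objectives are sensitive to optimization and initialization, with trade-offs in downstream task performance.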

Authors (4)
  1. Andrea Burns (11 papers)
  2. Aaron Sarna (10 papers)
  3. Dilip Krishnan (36 papers)
  4. Aaron Maschinot (5 papers)
Citations (4)
