Disentangling Factors of Variation with Cycle-Consistent Variational Auto-Encoders (1804.10469v1)

Published 27 Apr 2018 in cs.CV

Abstract: Generative models that learn disentangled representations for different factors of variation in an image can be very useful for targeted data augmentation. By sampling from the disentangled latent subspace of interest, we can efficiently generate new data necessary for a particular task. Learning disentangled representations is a challenging problem, especially when certain factors of variation are difficult to label. In this paper, we introduce a novel architecture that disentangles the latent space into two complementary subspaces by using only weak supervision in the form of pairwise similarity labels. Inspired by the recent success of cycle-consistent adversarial architectures, we use cycle-consistency in a variational auto-encoder framework. Our non-adversarial approach contrasts with recent works that combine adversarial training with auto-encoders to disentangle representations. We show compelling results of disentangled latent subspaces on three datasets and compare with recent works that leverage adversarial training.
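The core mechanism the abstract describes, splitting the latent code into a "specified" subspace (shared within a similarity pair) and an "unspecified" subspace, then enforcing that a latent swap followed by decode/re-encode preserves the unspecified part, can be illustrated with a minimal numpy sketch. Everything here is hypothetical scaffolding, not the paper's architecture: a toy orthogonal linear encoder/decoder stands in for the VAE, and `encode`, `decode`, and the subspace split are illustrative names.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy "images": 4-D vectors. A random orthogonal matrix W gives a linear
# encoder whose exact inverse (W.T) serves as the decoder, so we can focus
# on the latent-swap logic rather than on training a real VAE.
W = np.linalg.qr(rng.normal(size=(4, 4)))[0]
encode = lambda x: W @ x      # latent = [specified s (2-D) | unspecified z (2-D)]
decode = lambda h: W.T @ h    # exact inverse of encode in this toy setup

def split(h):
    # First half: specified subspace s; second half: unspecified subspace z.
    return h[:2], h[2:]

# A pair of samples assumed to share the same specified factor of variation
# (the weak, pairwise-similarity supervision the abstract mentions).
x1, x2 = rng.normal(size=4), rng.normal(size=4)
s1, z1 = split(encode(x1))
s2, z2 = split(encode(x2))

# Cycle: swap the specified parts across the pair, decode, then re-encode.
x1_swapped = decode(np.concatenate([s2, z1]))
s1_cyc, z1_cyc = split(encode(x1_swapped))

# Cycle-consistency penalty: the unspecified part must survive the swap.
# With the exact-inverse toy decoder this is zero; in a trained VAE it is
# a loss term driven toward zero.
cycle_loss = np.sum((z1_cyc - z1) ** 2)
```

In a real model the encoder and decoder are learned networks and the swap is performed on sampled latents, so the penalty is minimized by gradient descent rather than being identically zero as in this invertible toy case.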

Authors (4)
  1. Ananya Harsh Jha (8 papers)
  2. Saket Anand (28 papers)
  3. Maneesh Singh (37 papers)
  4. V. S. R. Veeravasarapu (6 papers)
Citations (121)
