Cross-Domain Latent Modulation for Variational Transfer Learning (2012.11727v1)

Published 21 Dec 2020 in cs.LG and cs.AI

Abstract: We propose a cross-domain latent modulation mechanism within a variational autoencoder (VAE) framework to enable improved transfer learning. Our key idea is to procure deep representations from one data domain and use them as a perturbation to the reparameterization of the latent variable in another domain. Specifically, deep representations of the source and target domains are first extracted by a unified inference model and aligned by employing gradient reversal. Second, the learned deep representations are cross-modulated into the latent encoding of the alternate domain. Consistency between the reconstruction from the modulated latent encoding and the generation from deep-representation samples is then enforced to produce inter-class alignment in the latent space. We apply the proposed model to a number of transfer learning tasks, including unsupervised domain adaptation and image-to-image translation. Experimental results show that our model gives competitive performance.
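The abstract describes two mechanisms: domain alignment of deep representations via gradient reversal, and a reparameterization step that is perturbed by the representation of the other domain. Below is a minimal, hypothetical PyTorch-style sketch of how these two pieces could be wired together; the function names, the specific modulation form `mu + std * (eps + h_other)`, and the usage snippet are illustrative assumptions, not the authors' released implementation.

```python
import torch
import torch.nn as nn


class GradReverse(torch.autograd.Function):
    """Gradient reversal layer: identity on the forward pass, negated
    (scaled) gradient on the backward pass, used for domain alignment."""

    @staticmethod
    def forward(ctx, x, lambd=1.0):
        ctx.lambd = lambd
        return x.view_as(x)

    @staticmethod
    def backward(ctx, grad_output):
        return -ctx.lambd * grad_output, None


def cross_modulated_reparameterize(mu, logvar, h_other):
    """Reparameterize a latent code while perturbing it with the deep
    representation of the *other* domain (assumed modulation form)."""
    std = torch.exp(0.5 * logvar)
    eps = torch.randn_like(std)
    # The cross-domain representation modulates the stochastic term;
    # the exact functional form in the paper may differ.
    return mu + std * (eps + h_other)


# Hypothetical usage with a shared (unified) inference model `encoder`
# and a `domain_classifier` head:
#   h_s, mu_s, logvar_s = encoder(x_source)
#   h_t, mu_t, logvar_t = encoder(x_target)
#   domain_logits = domain_classifier(GradReverse.apply(torch.cat([h_s, h_t]), 1.0))
#   z_s = cross_modulated_reparameterize(mu_s, logvar_s, h_t)  # target modulates source
#   z_t = cross_modulated_reparameterize(mu_t, logvar_t, h_s)  # source modulates target
```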

Authors (4)
  1. Jinyong Hou (5 papers)
  2. Jeremiah D. Deng (12 papers)
  3. Stephen Cranefield (17 papers)
  4. Xuejie Ding (4 papers)
Citations (1)
