Adversarial Transfer Learning for Cross-domain Visual Recognition (1711.08904v2)

Published 24 Nov 2017 in cs.CV

Abstract: In many practical visual recognition scenarios, the feature distribution in the source domain generally differs from that of the target domain, giving rise to general cross-domain visual recognition problems. To address this visual domain mismatch, we propose a novel semi-supervised adversarial transfer learning approach, called Coupled adversarial transfer Domain Adaptation (CatDA), for distribution alignment between two domains. The proposed CatDA approach is inspired by CycleGAN, but leverages multiple shallow multilayer perceptrons (MLPs) instead of deep networks. Specifically, CatDA comprises two symmetric and slim sub-networks that together form the coupled adversarial learning framework. Owing to the symmetry of the two generators, input data from the source/target domain can be fed into the MLP network to generate target/source-domain features, supervised by two confrontation-oriented coupled discriminators. Notably, to avoid the critical flaw of an overly high-capacity feature extraction function during domain adversarial training, a domain-specific loss and a domain knowledge fidelity loss are introduced in each generator, which guarantees the effectiveness of the proposed transfer network. Additionally, the essential difference from CycleGAN is that our method aims to generate domain-agnostic, aligned features for domain adaptation and transfer learning rather than to synthesize realistic images. We show experimentally on a number of benchmark datasets that the proposed approach achieves competitive performance over state-of-the-art domain adaptation and transfer learning approaches.
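The abstract describes a coupled pair of shallow MLP generators (source-to-target and target-to-source) trained against two coupled discriminators, with cycle-style coupling plus domain-specific and knowledge-fidelity losses. The sketch below illustrates that structure in PyTorch; it is not the authors' implementation. The layer widths, loss weights, and the exact forms of the domain-specific and knowledge-fidelity terms are not given in the abstract and are placeholder assumptions here.

```python
# Illustrative sketch of a coupled adversarial feature-alignment setup (CatDA-style).
# Assumptions: hidden sizes, loss weights, and the L1 stand-ins for the
# domain-specific / knowledge-fidelity losses are placeholders, not the paper's.
import torch
import torch.nn as nn

class MLPGenerator(nn.Module):
    """Shallow MLP mapping features from one domain toward the other."""
    def __init__(self, dim, hidden=256):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(dim, hidden), nn.ReLU(),
            nn.Linear(hidden, dim),
        )
    def forward(self, x):
        return self.net(x)

class MLPDiscriminator(nn.Module):
    """Shallow MLP scoring whether a feature looks like it came from its domain."""
    def __init__(self, dim, hidden=256):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(dim, hidden), nn.ReLU(),
            nn.Linear(hidden, 1),
        )
    def forward(self, x):
        return self.net(x)

dim = 128  # feature dimensionality (placeholder)
G_st, G_ts = MLPGenerator(dim), MLPGenerator(dim)        # source->target, target->source
D_t, D_s = MLPDiscriminator(dim), MLPDiscriminator(dim)  # coupled discriminators
bce = nn.BCEWithLogitsLoss()
l1 = nn.L1Loss()

def generator_loss(x_s, x_t, lam_cyc=1.0, lam_fid=0.1):
    """One illustrative generator objective for the coupled adversarial framework."""
    fake_t, fake_s = G_st(x_s), G_ts(x_t)
    # Adversarial terms: generated features should fool the opposite-domain discriminator.
    adv = bce(D_t(fake_t), torch.ones(x_s.size(0), 1)) + \
          bce(D_s(fake_s), torch.ones(x_t.size(0), 1))
    # Cycle-style reconstruction reflecting the symmetry of the two generators.
    cyc = l1(G_ts(fake_t), x_s) + l1(G_st(fake_s), x_t)
    # Placeholder "knowledge fidelity" term keeping generated features close to their inputs.
    fid = l1(fake_t, x_s) + l1(fake_s, x_t)
    return adv + lam_cyc * cyc + lam_fid * fid
```

In this reading, alignment is performed directly in feature space (the generators output features of the same dimensionality as their inputs), which matches the abstract's stated difference from CycleGAN: the goal is domain-agnostic, aligned features rather than synthesized images.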

Citations (15)
