Unsupervised Domain Adaptation: A Multi-task Learning-based Method (1803.09208v1)

Published 25 Mar 2018 in cs.CV

Abstract: This paper presents a novel multi-task learning-based method for unsupervised domain adaptation. Specifically, the source and target domain classifiers are jointly learned by considering the geometry of the target domain and the divergence between the source and target domains, based on the concept of multi-task learning. Two novel algorithms are proposed upon the method using Regularized Least Squares and Support Vector Machines, respectively. Experiments on both synthetic and real-world cross-domain recognition tasks show that the proposed methods outperform several state-of-the-art domain adaptation methods.
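
The abstract does not give the exact objective, but the ingredients it names (a regularized least-squares loss on labeled source data, a term encoding target-domain geometry, and a term penalizing source/target divergence) can be illustrated with a minimal sketch. The version below simplifies the paper's joint source/target classifiers to a single shared linear classifier, uses a k-NN graph Laplacian for the geometry term and a mean-discrepancy (MMD-style) matrix for the divergence term; all function names, regularization weights, and the k-NN construction are assumptions for illustration, not the authors' formulation.

```python
# Illustrative sketch only: joint RLS-style domain adaptation with
# (a) a graph-Laplacian term for target-domain geometry and
# (b) an MMD-style mean-discrepancy term aligning the two domains.
# Weights lam, gamma, mu and the k-NN graph are assumed, not from the paper.
import numpy as np

def fit_joint_rls(Xs, ys, Xt, lam=1.0, gamma=0.1, mu=0.1, k=5):
    """Return a linear weight vector w for d-dimensional inputs."""
    ns, d = Xs.shape
    nt = Xt.shape[0]
    X = np.vstack([Xs, Xt])                       # all samples, source + target

    # k-NN graph Laplacian over all samples (captures data geometry)
    d2 = ((X[:, None, :] - X[None, :, :]) ** 2).sum(-1)
    W = np.zeros_like(d2)
    idx = np.argsort(d2, axis=1)[:, 1:k + 1]      # skip self at distance 0
    for i, nbrs in enumerate(idx):
        W[i, nbrs] = 1.0
    W = np.maximum(W, W.T)                        # symmetrize
    L = np.diag(W.sum(1)) - W

    # MMD-style discrepancy matrix: (mean_s - mean_t)(mean_s - mean_t)^T in input space
    e = np.concatenate([np.ones(ns) / ns, -np.ones(nt) / nt])
    M = X.T @ np.outer(e, e) @ X

    # Closed-form RLS: squared loss on source labels + ridge + geometry + divergence
    A = Xs.T @ Xs + lam * np.eye(d) + gamma * X.T @ L @ X + mu * M
    b = Xs.T @ ys
    return np.linalg.solve(A, b)

# Example usage on synthetic, covariate-shifted domains
rng = np.random.default_rng(0)
Xs = rng.normal(size=(100, 2)); ys = np.sign(Xs[:, 0])
Xt = Xs + np.array([1.0, 0.5])                    # shifted target domain
w = fit_joint_rls(Xs, ys, Xt)
pred_t = np.sign(Xt @ w)                          # predictions on target samples
```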

Authors (3)
  1. Jing Zhang (731 papers)
  2. Wanqing Li (53 papers)
  3. Philip Ogunbona (19 papers)
Citations (15)
