
Multi-source Domain Adaptation in the Deep Learning Era: A Systematic Survey (2002.12169v1)

Published 26 Feb 2020 in cs.LG, cs.CV, and stat.ML

Abstract: In many practical applications, it is often difficult and expensive to obtain enough large-scale labeled data to train deep neural networks to their full capability. Therefore, transferring the learned knowledge from a separate, labeled source domain to an unlabeled or sparsely labeled target domain becomes an appealing alternative. However, direct transfer often results in significant performance decay due to domain shift. Domain adaptation (DA) addresses this problem by minimizing the impact of domain shift between the source and target domains. Multi-source domain adaptation (MDA) is a powerful extension in which the labeled data may be collected from multiple sources with different distributions. Due to the success of DA methods and the prevalence of multi-source data, MDA has attracted increasing attention in both academia and industry. In this survey, we define various MDA strategies and summarize available datasets for evaluation. We also compare modern MDA methods in the deep learning era, including latent space transformation and intermediate domain generation. Finally, we discuss future research directions for MDA.
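To make the "latent space transformation" family of MDA methods concrete, below is a minimal, illustrative sketch of one widely used instance: multi-source domain-adversarial alignment in the style of DANN, where a discriminator tries to tell domains apart in the shared feature space and a gradient-reversal layer pushes the feature extractor toward domain-invariant representations. This is not the paper's method; all architectures, dimensions, and toy data here are assumptions chosen for brevity.

```python
# Hedged sketch: DANN-style multi-source domain adaptation in PyTorch.
# Module sizes and the random toy batches are illustrative assumptions.
import torch
import torch.nn as nn
import torch.nn.functional as F

class GradReverse(torch.autograd.Function):
    """Identity in the forward pass; negates (and scales) gradients in
    the backward pass so the extractor learns domain-invariant features."""
    @staticmethod
    def forward(ctx, x, lam):
        ctx.lam = lam
        return x.view_as(x)

    @staticmethod
    def backward(ctx, grad_out):
        return -ctx.lam * grad_out, None

def grad_reverse(x, lam=1.0):
    return GradReverse.apply(x, lam)

num_sources, feat_dim, num_classes = 3, 64, 10
extractor = nn.Sequential(nn.Linear(100, feat_dim), nn.ReLU())
classifier = nn.Linear(feat_dim, num_classes)
# One domain label per source plus one for the unlabeled target.
discriminator = nn.Linear(feat_dim, num_sources + 1)

params = (list(extractor.parameters()) + list(classifier.parameters())
          + list(discriminator.parameters()))
opt = torch.optim.Adam(params, lr=1e-3)

# Toy batches: labeled data from each source, unlabeled target data.
sources = [(torch.randn(32, 100), torch.randint(0, num_classes, (32,)))
           for _ in range(num_sources)]
target_x = torch.randn(32, 100)

for step in range(100):
    opt.zero_grad()
    cls_loss, dom_loss = 0.0, 0.0
    for d, (x, y) in enumerate(sources):
        z = extractor(x)
        # Supervised classification loss on each labeled source.
        cls_loss = cls_loss + F.cross_entropy(classifier(z), y)
        # Adversarial loss: discriminator predicts the domain index.
        dom_logits = discriminator(grad_reverse(z))
        dom_loss = dom_loss + F.cross_entropy(
            dom_logits, torch.full((x.size(0),), d, dtype=torch.long))
    # Target samples participate only in the domain-alignment term.
    z_t = extractor(target_x)
    dom_logits_t = discriminator(grad_reverse(z_t))
    dom_loss = dom_loss + F.cross_entropy(
        dom_logits_t,
        torch.full((target_x.size(0),), num_sources, dtype=torch.long))
    (cls_loss + dom_loss).backward()
    opt.step()
```

The gradient reversal is the key design choice: the discriminator is trained to separate the domains, while the reversed gradients train the extractor to defeat it, aligning all source and target distributions in the latent space. Methods in the "intermediate domain generation" family the survey discusses instead synthesize bridging data (e.g., with generative models) rather than aligning features directly.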

Authors (5)
  1. Sicheng Zhao (53 papers)
  2. Bo Li (1107 papers)
  3. Colorado Reed (9 papers)
  4. Pengfei Xu (57 papers)
  5. Kurt Keutzer (200 papers)
Citations (96)
