Domain Adaptation via Rebalanced Sub-domain Alignment (2302.02009v1)

Published 3 Feb 2023 in cs.LG and stat.ML

Abstract: Unsupervised domain adaptation (UDA) is a technique used to transfer knowledge from a labeled source domain to a different but related unlabeled target domain. While many UDA methods have shown success in the past, they often assume that the source and target domains have identical class label distributions, which can limit their effectiveness in real-world scenarios. To address this limitation, we propose a novel generalization bound that reweights the source classification error by aligning source and target sub-domains. We prove that our proposed generalization bound is at least as strong as existing bounds under realistic assumptions, and we empirically show that it is much stronger on real-world data. We then propose an algorithm to minimize this novel generalization bound. We demonstrate through numerical experiments that this approach outperforms state-of-the-art methods in scenarios with shifted class distributions.
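
The bound and algorithm themselves are given in the paper; as a rough illustration of the general recipe the abstract describes, here is a minimal PyTorch sketch combining a class-reweighted source classification loss with per-class (sub-domain) alignment. The class weights, the pseudo-labeling step, the confidence threshold, and the RBF-kernel MMD alignment term are all illustrative assumptions, not the paper's exact formulation.

```python
import torch
import torch.nn.functional as F

def gaussian_mmd(x, y, sigma=1.0):
    """Squared MMD between two feature batches under an RBF kernel."""
    def k(a, b):
        d = torch.cdist(a, b) ** 2
        return torch.exp(-d / (2 * sigma ** 2))
    return k(x, x).mean() + k(y, y).mean() - 2 * k(x, y).mean()

def rebalanced_subdomain_loss(feats_s, logits_s, y_s, feats_t, logits_t,
                              class_weights, lam=1.0, conf_thresh=0.9):
    """Illustrative objective: class-reweighted source cross-entropy plus
    per-class (sub-domain) MMD alignment using confident target pseudo-labels.
    This is a sketch of the general idea, not the paper's algorithm."""
    # Reweighted source classification error; class_weights is an assumed
    # input (e.g., estimated target/source class-frequency ratios).
    ce = F.cross_entropy(logits_s, y_s, weight=class_weights)

    # Pseudo-label the unlabeled target batch and keep confident samples.
    probs_t = logits_t.softmax(dim=1)
    conf_t, y_t = probs_t.max(dim=1)
    mask = conf_t >= conf_thresh

    # Align each sub-domain (class) across source and target features.
    align, matched = 0.0, 0
    for c in range(logits_s.size(1)):
        xs = feats_s[y_s == c]
        xt = feats_t[mask & (y_t == c)]
        if len(xs) >= 2 and len(xt) >= 2:
            align = align + gaussian_mmd(xs, xt)
            matched += 1
    if matched:
        align = align / matched
    return ce + lam * align
```

In a training loop, feats_* and logits_* would come from a shared feature extractor and classifier head applied to a source batch and a target batch, and class_weights could be estimated, for instance, from the ratio of predicted target class frequencies to observed source class frequencies.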

Authors (8)
  1. Yiling Liu (8 papers)
  2. Juncheng Dong (18 papers)
  3. Ziyang Jiang (10 papers)
  4. Ahmed Aloui (10 papers)
  5. Keyu Li (22 papers)
  6. Hunter Klein (1 paper)
  7. Vahid Tarokh (144 papers)
  8. David Carlson (36 papers)
Citations (2)