Domain Adaptation with Asymmetrically-Relaxed Distribution Alignment (1903.01689v2)

Published 5 Mar 2019 in cs.LG and stat.ML

Abstract: Domain adaptation addresses the common problem where the target distribution generating our test data drifts from the source (training) distribution. While domain adaptation is impossible absent assumptions, strict conditions, e.g., covariate or label shift, enable principled algorithms. Recently proposed domain-adversarial approaches align source and target encodings, often motivated as minimizing two (of three) terms in a theoretical bound on target error. Unfortunately, this minimization can cause arbitrary increases in the third term; e.g., these methods can break down under shifting label distributions. We propose asymmetrically-relaxed distribution alignment, a new approach that overcomes some limitations of standard domain-adversarial algorithms. Moreover, we characterize precise assumptions under which our algorithm is theoretically principled and demonstrate empirical benefits on both synthetic and real datasets.
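
To make the idea concrete, the sketch below shows one way a relaxed, one-sided alignment penalty could be wired into a standard domain-adversarial (DANN-style) setup in PyTorch. It is an illustration under assumptions, not the paper's exact objective: the `Encoder` and `Discriminator` architectures, the relaxation parameter `beta`, and the use of the discriminator logit as a density-ratio estimate are choices made here for the example.

```python
# Hedged sketch: an asymmetrically-relaxed alignment penalty in a
# DANN-style setup. All names (Encoder, Discriminator, beta) are
# illustrative assumptions, not taken from the paper's code.
import torch
import torch.nn as nn
import torch.nn.functional as F


class Encoder(nn.Module):
    """Maps inputs x to a latent encoding z."""
    def __init__(self, in_dim=2, z_dim=16):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(in_dim, 32), nn.ReLU(), nn.Linear(32, z_dim)
        )

    def forward(self, x):
        return self.net(x)


class Discriminator(nn.Module):
    """Outputs the logit of P(domain = source | z)."""
    def __init__(self, z_dim=16):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(z_dim, 32), nn.ReLU(), nn.Linear(32, 1)
        )

    def forward(self, z):
        return self.net(z).squeeze(-1)


def relaxed_alignment_penalty(disc, z_tgt, beta=1.0):
    """One-sided alignment penalty for the encoder.

    Symmetric alignment would push source and target encodings to match
    exactly. Here the encoder is only penalized where the estimated
    density ratio p_t(z) / p_s(z) exceeds (1 + beta), i.e., the target
    encoding must be covered by an inflated source encoding rather than
    matched, which is the kind of relaxation the abstract describes.
    """
    logit_src = disc(z_tgt)                  # logit of P(source | z) at target points
    log_ratio = -logit_src                   # ~ log p_t(z)/p_s(z) for an optimal disc (equal priors)
    slack = torch.log(torch.tensor(1.0 + beta))
    return F.relu(log_ratio - slack).mean()  # penalize only the excess beyond the allowed ratio


# Usage (shapes only): encode source/target batches, train the
# discriminator with the usual binary cross-entropy on z_src vs. z_tgt,
# and add the relaxed penalty to the encoder/classifier objective
# in place of the symmetric adversarial term.
enc, disc = Encoder(), Discriminator()
x_tgt = torch.randn(8, 2)
penalty = relaxed_alignment_penalty(disc, enc(x_tgt), beta=2.0)
```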

Authors (4)
  1. Yifan Wu (102 papers)
  2. Ezra Winston (7 papers)
  3. Divyansh Kaushik (8 papers)
  4. Zachary Lipton (10 papers)
Citations (125)
