
Robustified Domain Adaptation (2011.09563v2)

Published 18 Nov 2020 in cs.CV

Abstract: Unsupervised domain adaptation (UDA) is widely used to transfer knowledge from a labeled source domain to an unlabeled target domain with a different data distribution. While extensive studies have attested that deep learning models are vulnerable to adversarial attacks, the adversarial robustness of models in domain adaptation applications has largely been overlooked. This paper points out that the inevitable domain distribution deviation in UDA is a critical barrier to model robustness on the target domain. To address the problem, we propose a novel Class-consistent Unsupervised Robust Domain Adaptation (CURDA) framework for training robust UDA models. With the introduced contrastive robust training and source-anchored adversarial contrastive losses, the proposed CURDA framework can effectively robustify UDA models by simultaneously minimizing the data distribution deviation and the distance between clean-adversarial pairs in the target domain, without creating classification confusion. Experiments on several public benchmarks show that CURDA significantly improves model robustness in the target domain at only a minor cost in accuracy on clean samples.
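The abstract describes two coupled objectives: anchoring target embeddings to source class representatives (to avoid classification confusion) and pulling each target clean-adversarial pair together. A minimal sketch of such a combined loss is shown below; the function name, the use of class-mean anchors, the temperature, and the pairing weight are all illustrative assumptions, not the paper's actual formulation.

```python
import numpy as np

def l2_normalize(x, axis=-1):
    """Project embeddings onto the unit sphere for cosine similarity."""
    return x / np.linalg.norm(x, axis=axis, keepdims=True)

def curda_style_loss(z_clean, z_adv, anchors, pseudo_labels,
                     tau=0.1, pair_weight=1.0):
    """Hypothetical sketch of a source-anchored adversarial contrastive loss.

    z_clean, z_adv : (N, D) target-domain embeddings of clean samples and
                     their adversarial counterparts (hypothetical inputs).
    anchors        : (C, D) per-class source-domain anchor embeddings,
                     e.g. class means (an assumption, not the paper's choice).
    pseudo_labels  : (N,) predicted classes for the target samples.

    Term 1 pulls each adversarial embedding toward its pseudo-label's
    source anchor and away from other anchors (softmax over anchors).
    Term 2 shrinks the clean-adversarial pair distance.
    """
    z_clean = l2_normalize(z_clean)
    z_adv = l2_normalize(z_adv)
    anchors = l2_normalize(anchors)

    # Cosine similarities of adversarial embeddings to all class anchors: (N, C)
    logits = (z_adv @ anchors.T) / tau
    # Numerically stable log-softmax over anchors
    logits -= logits.max(axis=1, keepdims=True)
    log_probs = logits - np.log(np.exp(logits).sum(axis=1, keepdims=True))
    anchor_term = -log_probs[np.arange(len(z_adv)), pseudo_labels].mean()

    # Squared L2 distance between each clean-adversarial pair
    pair_term = np.mean(np.sum((z_adv - z_clean) ** 2, axis=1))

    return anchor_term + pair_weight * pair_term
```

As a sanity check, when adversarial embeddings coincide with their clean counterparts and sit exactly on the correct anchors, both terms are at their minimum; perturbing the adversarial embeddings increases the loss.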

Authors (3)
  1. Jiajin Zhang (18 papers)
  2. Hanqing Chao (18 papers)
  3. Pingkun Yan (55 papers)
Citations (4)