
An Adversarial Perturbation Oriented Domain Adaptation Approach for Semantic Segmentation (1912.08954v1)

Published 18 Dec 2019 in cs.CV

Abstract: We focus on Unsupervised Domain Adaptation (UDA) for the task of semantic segmentation. Recently, adversarial alignment has been widely adopted to globally match the marginal distributions of feature representations across two domains. However, this strategy fails to adapt the representations of tail classes or small objects for semantic segmentation, since the alignment objective is dominated by head categories or large objects. In contrast to adversarial alignment, we propose to explicitly train a domain-invariant classifier by generating and defending against pointwise feature-space adversarial perturbations. Specifically, we first perturb the intermediate feature maps with several attack objectives (i.e., the discriminator and the classifier) at each individual position for both domains, and then train the classifier to be invariant to the perturbations. By perturbing each position individually, our model treats every location equally regardless of category or object size, and thus circumvents the aforementioned issue. Moreover, the domain gap in feature space is reduced by extrapolating source and target perturbed features toward each other via an attack on the domain discriminator. Our approach achieves state-of-the-art performance on two challenging domain adaptation tasks for semantic segmentation: GTA5 -> Cityscapes and SYNTHIA -> Cityscapes.
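The core idea in the abstract, perturbing each spatial position of a feature map independently so that small objects and tail classes get the same treatment as large ones, can be illustrated with a minimal NumPy sketch. This is not the authors' implementation: it assumes a linear per-pixel classifier and uses an FGSM-style sign-of-gradient attack on the classifier objective only; all names (`pointwise_fgsm`, `W`, etc.) are illustrative.

```python
import numpy as np

def softmax(z, axis=-1):
    # Numerically stable softmax along the class axis.
    z = z - z.max(axis=axis, keepdims=True)
    e = np.exp(z)
    return e / e.sum(axis=axis, keepdims=True)

def pointwise_fgsm(features, W, labels, eps=0.1):
    """Perturb every spatial position of a feature map independently.

    features: (H, Wsp, C) intermediate feature map
    W:        (C, K) weights of a hypothetical linear per-pixel classifier
    labels:   (H, Wsp) integer class labels per position
    eps:      perturbation magnitude

    Each position gets its own gradient, so the attack (and the
    subsequent invariance training) treats every location equally
    regardless of the category or object size it belongs to.
    """
    logits = features @ W                 # (H, Wsp, K) per-pixel logits
    probs = softmax(logits)               # per-position class probabilities
    onehot = np.eye(W.shape[1])[labels]   # (H, Wsp, K) one-hot targets
    # Closed-form gradient of per-position cross-entropy w.r.t. features
    # for a linear classifier: (p - y) @ W^T.
    grad = (probs - onehot) @ W.T         # (H, Wsp, C)
    # FGSM-style step: move each position in its own ascent direction.
    return features + eps * np.sign(grad)
```

In the paper's setting the classifier would then be trained to predict the same labels on the perturbed features, and an analogous attack on the domain discriminator would push source and target features toward each other; both are omitted here for brevity.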

Authors (7)
  1. Jihan Yang (19 papers)
  2. Ruijia Xu (9 papers)
  3. Ruiyu Li (14 papers)
  4. Xiaojuan Qi (133 papers)
  5. Xiaoyong Shen (27 papers)
  6. Guanbin Li (177 papers)
  7. Liang Lin (318 papers)
Citations (93)
