
Domain Adaptation for Semantic Segmentation via Class-Balanced Self-Training (1810.07911v2)

Published 18 Oct 2018 in cs.CV, cs.LG, and cs.MM

Abstract: Recent deep networks achieved state-of-the-art performance on a variety of semantic segmentation tasks. Despite such progress, these models often face challenges in real-world 'wild tasks', where a large difference exists between labeled training/source data and unseen test/target data. In particular, such difference is often referred to as a 'domain gap', and can cause significantly decreased performance that cannot be easily remedied by further increasing the representation power. Unsupervised domain adaptation (UDA) seeks to overcome this problem without target domain labels. In this paper, we propose a novel UDA framework based on an iterative self-training procedure, where the problem is formulated as latent variable loss minimization, and can be solved by alternately generating pseudo labels on target data and re-training the model with these labels. On top of self-training, we also propose a novel class-balanced self-training framework to avoid the gradual dominance of large classes in pseudo-label generation, and introduce spatial priors to refine generated labels. Comprehensive experiments show that the proposed methods achieve state-of-the-art semantic segmentation performance under multiple major UDA settings.
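The class-balanced selection step is the heart of the method: instead of one global confidence cutoff (which lets easy, frequent classes crowd out rare ones), each class gets its own threshold. The sketch below is a minimal illustration of that idea, assuming numpy softmax outputs for a single target image; the function and parameter names (`class_balanced_pseudo_labels`, `keep_portion`) are hypothetical, and the paper determines per-class thresholds from confidence rankings over the whole target set rather than per image.

```python
import numpy as np

def class_balanced_pseudo_labels(probs, num_classes, keep_portion=0.2, ignore_index=255):
    """Sketch of class-balanced pseudo-label generation.

    probs: (C, H, W) softmax probabilities for one target image.
    A separate threshold is chosen per class so that roughly
    `keep_portion` of the pixels predicted as each class are kept,
    preventing large classes from dominating the pseudo labels.
    """
    pred = probs.argmax(axis=0)   # hard prediction per pixel, shape (H, W)
    conf = probs.max(axis=0)      # confidence of that prediction

    # Per-class threshold: the (1 - keep_portion) quantile of the
    # confidences among pixels currently predicted as class c.
    thresholds = np.ones(num_classes)
    for c in range(num_classes):
        c_conf = conf[pred == c]
        if c_conf.size > 0:
            thresholds[c] = np.quantile(c_conf, 1.0 - keep_portion)

    # Keep a pixel's pseudo label only if its confidence clears the
    # threshold of its own class; all other pixels are marked with
    # ignore_index and excluded from the re-training loss.
    labels = np.where(conf >= thresholds[pred], pred, ignore_index)
    return labels.astype(np.int64)
```

In the full self-training loop, these pseudo labels would be fed back as targets for another round of supervised training, with `keep_portion` typically grown over iterations so the model gradually trusts more of its own predictions.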

Authors (4)
  1. Yang Zou (43 papers)
  2. Zhiding Yu (94 papers)
  3. B. V. K. Vijaya Kumar (22 papers)
  4. Jinsong Wang (5 papers)
Citations (63)
