
Unsupervised BatchNorm Adaptation (UBNA): A Domain Adaptation Method for Semantic Segmentation Without Using Source Domain Representations (2011.08502v2)

Published 17 Nov 2020 in cs.CV

Abstract: In this paper we present a solution to the task of "unsupervised domain adaptation (UDA) of a given pre-trained semantic segmentation model without relying on any source domain representations". Previous UDA approaches for semantic segmentation either employed simultaneous training of the model in the source and target domains, or they relied on an additional network, replaying source domain knowledge to the model during adaptation. In contrast, we present our novel Unsupervised BatchNorm Adaptation (UBNA) method, which adapts a given pre-trained model to an unseen target domain without using -- beyond the existing model parameters from pre-training -- any source domain representations (neither data, nor networks) and which can also be applied in an online setting or using just a few unlabeled images from the target domain in a few-shot manner. Specifically, we partially adapt the normalization layer statistics to the target domain using an exponentially decaying momentum factor, thereby mixing the statistics from both domains. By evaluation on standard UDA benchmarks for semantic segmentation we show that this is superior to a model without adaptation and to baseline approaches using statistics from the target domain only. Compared to standard UDA approaches we report a trade-off between performance and usage of source domain representations.
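The core of the method is a statistics-only update: the pre-trained weights stay frozen, and only the running mean and variance of the normalization layers are nudged toward the target domain with a momentum that decays exponentially over adaptation steps, so the final statistics are a mix of source and target. Below is a minimal PyTorch sketch of this idea, not the authors' reference implementation; the function name `ubna_adapt` and the hyperparameters `alpha0`, `decay`, and `num_steps` are illustrative assumptions, not values from the paper.

```python
import torch
import torch.nn as nn

def ubna_adapt(model, target_loader, alpha0=0.1, decay=0.95, num_steps=50):
    """Sketch of Unsupervised BatchNorm Adaptation (UBNA).

    Only the running statistics of BatchNorm layers are updated from
    unlabeled target-domain batches; all learned weights stay frozen.
    alpha0/decay/num_steps are illustrative, not the paper's values.
    """
    model.eval()  # keep dropout etc. in inference mode
    bn_layers = [m for m in model.modules()
                 if isinstance(m, (nn.BatchNorm1d, nn.BatchNorm2d))]
    for bn in bn_layers:
        bn.train()  # re-enable running-stat updates for BN layers only

    with torch.no_grad():  # no gradients: only statistics are adapted
        for step, images in enumerate(target_loader):
            if step >= num_steps:
                break
            # exponentially decaying momentum mixes source and target stats
            momentum = alpha0 * (decay ** step)
            for bn in bn_layers:
                bn.momentum = momentum
            model(images)  # forward pass updates running mean/variance
    return model
```

Because no gradients are computed, a single forward pass per batch suffices, which is what makes the method cheap enough for the online and few-shot settings the abstract mentions: `target_loader` here is assumed to yield batches of unlabeled target-domain images.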

Authors (4)
  1. Marvin Klingner (17 papers)
  2. Jan-Aike Termöhlen (9 papers)
  3. Jacob Ritterbach (1 paper)
  4. Tim Fingscheidt (56 papers)
Citations (36)
