
Introducing Intermediate Domains for Effective Self-Training during Test-Time (2208.07736v2)

Published 16 Aug 2022 in cs.CV

Abstract: Experiencing domain shifts during test-time is nearly inevitable in practice and likely results in severe performance degradation. To overcome this issue, test-time adaptation continues to update the initial source model during deployment. A promising direction is methods based on self-training, which have been shown to be well suited for gradual domain adaptation, since they can provide reliable pseudo-labels. In this work, we address two problems that arise when applying self-training in the setting of test-time adaptation. First, adapting a model to long test sequences that contain multiple domains can lead to error accumulation. Second, not all shifts are gradual in practice. To tackle these challenges, we introduce GTTA. GTTA creates artificial intermediate domains that divide the current domain shift into a more gradual one, enabling effective self-training through high-quality pseudo-labels. To create the intermediate domains, we propose two independent variants: mixup and lightweight style transfer. We demonstrate the effectiveness of our approach on the continual and gradual corruption benchmarks, as well as on ImageNet-R. To further investigate gradual shifts in the context of urban scene segmentation, we publish a new benchmark, CarlaTTA, which enables the exploration of several non-stationary domain shifts.
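
The abstract's core recipe, bridging a test-time shift with artificial intermediate domains and then self-training on confident pseudo-labels, can be sketched in a few lines. The snippet below is a minimal illustration, not the authors' implementation: it uses pixel-space mixup between a small replay buffer of source images and the current test batch, and the names (`model`, `num_intermediate`, `confidence_threshold`) are illustrative assumptions; the paper's style-transfer variant and exact losses are not reproduced here.

```python
# Minimal sketch of intermediate-domain self-training, assuming access to a
# small buffer of source images. All hyperparameters are illustrative.
import torch
import torch.nn.functional as F

def adapt_on_batch(model, optimizer, source_images, test_images,
                   num_intermediate=4, confidence_threshold=0.9):
    """One test-time adaptation step: walk from source-like images toward
    the current test batch through mixup-based intermediate domains,
    self-training on confident pseudo-labels at each step."""
    model.train()
    for step in range(1, num_intermediate + 1):
        lam = step / num_intermediate  # interpolation weight toward test domain
        # Artificial intermediate domain: pixel-space mixup of source and test
        mixed = (1.0 - lam) * source_images + lam * test_images

        # Pseudo-label the intermediate batch; keep only confident predictions
        with torch.no_grad():
            probs = F.softmax(model(mixed), dim=1)
            conf, pseudo_labels = probs.max(dim=1)
            mask = conf >= confidence_threshold

        if mask.any():
            logits = model(mixed[mask])
            loss = F.cross_entropy(logits, pseudo_labels[mask])
            optimizer.zero_grad()
            loss.backward()
            optimizer.step()
    return model
```

Because each intermediate batch lies closer to the domain the model was just adapted to, the pseudo-labels stay more reliable than they would under a single abrupt jump to the test distribution, which is the intuition behind dividing the shift into a more gradual one.
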

Authors (3)
  1. Robert A. Marsden (8 papers)
  2. Mario Döbler (10 papers)
  3. Bin Yang (320 papers)
Citations (7)
