
Two Simple Principles for Diffusion-Based Test-Time Adaptation (2312.05274v2)

Published 8 Dec 2023 in cs.LG and cs.CV

Abstract: Diffusion-based test-time adaptation (TTA) methods have recently shown great progress; they leverage a diffusion model to map images from an unknown test domain back to the training domain. Because test domains are unseen and diverse, diffusion-based TTA is an ill-posed problem. In this paper, we distill two simple principles that underlie the design of diffusion-based methods. Principle 1 is semantic-similarity preservation: the generated image should preserve the semantics of the original test image. Principle 2 is minimal modification: the diffusion model should map the test image to the training domain while changing it as little as possible. Following these two principles, we propose a simple yet effective principle-guided diffusion-based test-time adaptation method (PDDA). Concretely, for Principle 1 we propose a semantic keeper that preserves feature similarity and filters out the corruption introduced by the test domain, thereby better preserving semantics. For Principle 2, we propose a modification keeper that adds a regularization constraint to the generative process to minimize modifications to the test image. Because the two principles can conflict, we further introduce a gradient-based view that unifies the update directions derived from them. Extensive experiments on CIFAR-10C, CIFAR-100C, ImageNet-W, and ImageNet-C with WideResNet-28-10, ResNet-50, Swin-T, and ConvNext-T show that PDDA significantly outperforms more complex state-of-the-art baselines. In particular, PDDA improves average accuracy on ImageNet-C by 2.4% without any training process.
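To make the two principles and their gradient-based combination concrete, here is a minimal sketch of one guided reverse-diffusion step. It is not the paper's PDDA implementation: `diffusion_model.predict_x0`, `diffusion_model.reverse_step`, `feature_extractor`, and the weights `lambda_sem` / `lambda_mod` are assumed placeholders, and the guidance is combined as a simple weighted sum of the two gradients rather than the paper's specific unification scheme.

```python
import torch
import torch.nn.functional as F

def guided_reverse_step(x_t, t, test_image, diffusion_model, feature_extractor,
                        lambda_sem=1.0, lambda_mod=1.0):
    """One reverse-diffusion step guided by two losses (sketch, not PDDA itself):
      - Principle 1: keep features of the current estimate close to those of
        the original test image (semantic-similarity preservation).
      - Principle 2: keep the current estimate close to the test image in
        pixel space (minimal modification).
    `diffusion_model` and `feature_extractor` are hypothetical stand-ins for a
    pretrained diffusion sampler and a frozen feature encoder.
    """
    x_t = x_t.detach().requires_grad_(True)

    # Predict the clean image x0 from the current noisy sample (model-specific API).
    x0_pred = diffusion_model.predict_x0(x_t, t)

    # Principle 1: cosine-similarity loss between encoder features.
    feat_test = feature_extractor(test_image).detach()
    feat_pred = feature_extractor(x0_pred)
    sem_loss = 1.0 - F.cosine_similarity(
        feat_pred.flatten(1), feat_test.flatten(1), dim=1).mean()

    # Principle 2: L2 regularization toward the original test image.
    mod_loss = F.mse_loss(x0_pred, test_image)

    # Gradient-based view: combine the two guidance directions.
    g_sem = torch.autograd.grad(sem_loss, x_t, retain_graph=True)[0]
    g_mod = torch.autograd.grad(mod_loss, x_t)[0]
    guidance = lambda_sem * g_sem + lambda_mod * g_mod

    # Shift the standard reverse update by the combined guidance; the proper
    # scaling of the guidance term depends on the sampler and noise schedule.
    with torch.no_grad():
        x_prev = diffusion_model.reverse_step(x_t, t) - guidance
    return x_prev.detach()
```

In this sketch the conflict between the two principles shows up as opposing gradient directions; the weighted sum is only the simplest way to merge them, whereas the paper resolves the conflict through its gradient-based unification.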
