Mitigating the Bias in the Model for Continual Test-Time Adaptation
Abstract: Continual Test-Time Adaptation (CTA) is a challenging task that aims to adapt a source pre-trained model to continually changing target domains. In the CTA setting, the model does not know when the target domain changes and thus faces drastic shifts in the distribution of streaming inputs at test time. The key challenge is to keep adapting the model to the continually changing target domains in an online manner. We find that a model produces highly biased predictions as it constantly adapts to the changing distribution of the target data: it predicts certain classes far more often than others, yielding inaccurate, over-confident predictions. This paper mitigates this issue to improve performance in the CTA scenario. To alleviate the bias, we maintain class-wise exponential moving average target prototypes built from reliable target samples and exploit them to cluster the target features by class. Moreover, we align the target distribution to the source distribution by anchoring each target feature to its corresponding source prototype. In extensive experiments, our proposed method achieves a noteworthy performance gain when applied on top of existing CTA methods, without substantial adaptation-time overhead.
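The two ingredients the abstract describes can be sketched as follows. This is a minimal illustration, not the paper's implementation: the momentum value, the confidence threshold for "reliable" samples, and the cosine-distance anchoring loss are all assumptions chosen for clarity.

```python
import math

def ema_update(prototype, feature, momentum=0.99):
    """Class-wise EMA target-prototype update (momentum value is a hypothetical choice)."""
    return [momentum * p + (1.0 - momentum) * f for p, f in zip(prototype, feature)]

def is_reliable(probs, threshold=0.9):
    """Treat a sample as 'reliable' if its max predicted probability clears a
    confidence threshold (the threshold itself is an assumed hyperparameter)."""
    return max(probs) >= threshold

def cosine(a, b):
    """Cosine similarity between two feature vectors."""
    na = math.sqrt(sum(x * x for x in a))
    nb = math.sqrt(sum(x * x for x in b))
    return sum(x * y for x, y in zip(a, b)) / (na * nb + 1e-12)

def anchoring_loss(target_feature, source_prototype):
    """Pull a target feature toward its class's source prototype by
    minimizing cosine distance (one plausible form of the anchoring idea)."""
    return 1.0 - cosine(target_feature, source_prototype)
```

Only reliable samples would feed `ema_update`, so the class-wise prototypes are not contaminated by the model's biased, over-confident mistakes; `anchoring_loss` then ties each clustered target feature back to the corresponding source prototype.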