Diversity-aware Buffer for Coping with Temporally Correlated Data Streams in Online Test-time Adaptation (2401.00989v1)
Abstract: Since distribution shifts are likely to occur after a model's deployment and can drastically decrease the model's performance, online test-time adaptation (TTA) continually updates the model at test time, leveraging the current test data. In real-world scenarios, test data streams are not always independent and identically distributed (i.i.d.); instead, they are frequently temporally correlated, making them non-i.i.d. Many existing methods struggle to cope with this scenario. In response, we propose a diversity-aware and category-balanced buffer that can simulate an i.i.d. data stream, even in non-i.i.d. scenarios. Combined with a diversity- and entropy-weighted entropy loss, we show that stable adaptation is possible on a wide range of corruptions and natural domain shifts based on ImageNet. We achieve state-of-the-art results on most considered benchmarks.
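The central idea described in the abstract is a category-balanced buffer that prefers diverse samples, so that batches drawn from it approximate an i.i.d. stream even when the incoming test stream is temporally correlated. The sketch below is a minimal illustration of that idea under stated assumptions, not the paper's implementation: pseudo-labels are taken from the model's softmax predictions, redundancy within a class is approximated by cosine similarity between softmax outputs, and the names `DiversityBuffer` and `weighted_entropy_loss` as well as the loss weighting are illustrative placeholders.

```python
import torch
import torch.nn.functional as F


class DiversityBuffer:
    """Illustrative category-balanced, diversity-aware buffer (not the paper's code).

    Pseudo-labels are the argmax of the model's softmax output; redundancy within a
    class is approximated by cosine similarity between softmax outputs. Both choices
    are assumptions made for this sketch.
    """

    def __init__(self, capacity: int, num_classes: int):
        self.capacity = capacity
        self.num_classes = num_classes
        self.samples = []  # buffered inputs
        self.probs = []    # softmax outputs of the buffered inputs
        self.labels = []   # pseudo-labels (argmax of probs)

    def add(self, x: torch.Tensor, prob: torch.Tensor) -> None:
        """Insert one test sample; if full, evict a redundant sample of the majority class."""
        y = int(prob.argmax())
        if len(self.samples) < self.capacity:
            self.samples.append(x)
            self.probs.append(prob)
            self.labels.append(y)
            return
        # Keep the buffer category-balanced: evict from the currently largest class.
        counts = torch.bincount(torch.tensor(self.labels), minlength=self.num_classes)
        majority = int(counts.argmax())
        idxs = [i for i, lab in enumerate(self.labels) if lab == majority]
        # Within that class, drop the least diverse sample, i.e. the one most
        # similar (on average) to the other buffered samples of the same class.
        class_probs = torch.stack([self.probs[i] for i in idxs])
        sim = F.cosine_similarity(class_probs.unsqueeze(1), class_probs.unsqueeze(0), dim=-1)
        evict = idxs[int(sim.sum(dim=1).argmax())]
        self.samples[evict], self.probs[evict], self.labels[evict] = x, prob, y

    def sample_batch(self, batch_size: int) -> torch.Tensor:
        """Draw a random batch; since the buffer is balanced and diverse, the batch
        approximates an i.i.d. stream even under temporal correlation."""
        idx = torch.randperm(len(self.samples))[:batch_size]
        return torch.stack([self.samples[i] for i in idx])


def weighted_entropy_loss(probs: torch.Tensor, weights: torch.Tensor) -> torch.Tensor:
    """Entropy minimization with per-sample weights (placeholder weighting, not the
    paper's exact diversity- and entropy-weighted formulation)."""
    entropy = -(probs * probs.clamp_min(1e-8).log()).sum(dim=1)
    return (weights * entropy).mean()
```

In an actual TTA loop one would push each incoming test sample into such a buffer, draw a balanced batch, and update the adapted parameters with the weighted entropy objective; the paper's exact buffer criteria and weighting should be taken from the published method.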
- Mario Döbler
- Florian Marencke
- Robert A. Marsden
- Bin Yang