AR-TTA: A Simple Method for Real-World Continual Test-Time Adaptation (2309.10109v2)

Published 18 Sep 2023 in cs.CV, cs.AI, and cs.LG

Abstract: Test-time adaptation is a promising research direction that allows the source model to adapt itself to changes in data distribution without any supervision. Yet, current methods are usually evaluated on benchmarks that are only a simplification of real-world scenarios. Hence, we propose to validate test-time adaptation methods using the recently introduced datasets for autonomous driving, namely CLAD-C and SHIFT. We observe that current test-time adaptation methods struggle to effectively handle varying degrees of domain shift, often resulting in degraded performance that falls below that of the source model. We find that the root of the problem lies in the inability to preserve the knowledge of the source model and to adapt to dynamically changing, temporally correlated data streams. Therefore, we enhance the well-established self-training framework by incorporating a small memory buffer to increase model stability and, at the same time, perform dynamic adaptation based on the intensity of domain shift. The proposed method, named AR-TTA, outperforms existing approaches on both synthetic and more realistic benchmarks and shows robustness across a variety of TTA scenarios. The code is available at https://github.com/dmn-sjk/AR-TTA.
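The abstract outlines three interacting mechanisms: self-training on pseudo-labels, a small memory buffer that stabilizes the model, and updates scaled by an estimate of domain-shift intensity. The sketch below illustrates how such pieces could fit together in PyTorch; it is not the authors' implementation (see the linked repository for that), and the buffer policy, the `shift_intensity` proxy, and all names are assumptions made for illustration.

```python
# Minimal sketch of the mechanism the abstract describes: pseudo-label
# self-training, a small replay buffer for stability, and updates scaled
# by an estimate of domain-shift intensity. Illustrative only, not the
# authors' implementation; see https://github.com/dmn-sjk/AR-TTA.
import random
import torch
import torch.nn.functional as F


class ReplayBuffer:
    """Small reservoir buffer, assumed to be pre-filled with a handful of
    labeled source samples before the test stream starts."""

    def __init__(self, capacity=512):
        self.capacity = capacity
        self.data = []
        self.seen = 0

    def add(self, x, y):
        self.seen += 1
        if len(self.data) < self.capacity:
            self.data.append((x, y))
        else:
            # Reservoir sampling keeps a uniform subsample of the stream.
            i = random.randrange(self.seen)
            if i < self.capacity:
                self.data[i] = (x, y)

    def sample(self, n):
        return random.sample(self.data, min(n, len(self.data)))


def shift_intensity(batch_stats, source_stats):
    """Crude proxy for domain-shift strength: distance between current
    batch statistics and stored source statistics, clipped to [0, 1]."""
    return (batch_stats - source_stats).norm().clamp(max=1.0).item()


def tta_step(model, optimizer, x, buffer, source_stats):
    """One adaptation step on an unlabeled test batch x (shape NCHW)."""
    # 1) Self-training: pseudo-label the incoming batch.
    with torch.no_grad():
        pseudo = model(x).argmax(dim=1)
    # 2) Scale the adaptation loss by the estimated shift intensity,
    #    using per-channel input means as a toy statistic.
    alpha = shift_intensity(x.mean(dim=(0, 2, 3)), source_stats)
    loss = alpha * F.cross_entropy(model(x), pseudo)
    # 3) Replay buffered source samples to anchor source knowledge.
    replay = buffer.sample(8)
    if replay:
        xr = torch.stack([xi for xi, _ in replay])
        yr = torch.stack([yi for yi, _ in replay])
        loss = loss + F.cross_entropy(model(xr), yr)
    optimizer.zero_grad()
    loss.backward()
    optimizer.step()
```

The paper defines the buffer contents, the shift estimate, and the update rule precisely; the sketch above only mirrors their interplay, with stronger shifts driving larger self-training updates while replayed samples guard against forgetting.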

Authors (4)
  1. Damian Sójka
  2. Sebastian Cygert
  3. Bartłomiej Twardowski
  4. Tomasz Trzciński
