
Few Clicks Suffice: Active Test-Time Adaptation for Semantic Segmentation (2312.01835v1)

Published 4 Dec 2023 in cs.CV

Abstract: Test-time adaptation (TTA) adapts pre-trained models during inference using unlabeled test data and has received considerable research attention for its practical value. Unfortunately, without any label supervision, existing TTA methods rely heavily on heuristics or empirical studies: the choice of where to update the model is often suboptimal or incurs extra computational cost. Meanwhile, a significant performance gap remains between TTA approaches and their supervised counterparts. Motivated by active learning, in this work we propose the active test-time adaptation setup for semantic segmentation. Specifically, we introduce a human-in-the-loop pattern during the testing phase, which queries very few labels to facilitate predictions and model updates in an online manner. To this end, we propose a simple but effective framework, ATASeg, which consists of two parts: a model adapter and a label annotator. Extensive experiments demonstrate that ATASeg bridges the performance gap between TTA methods and their supervised counterparts with only extremely few annotations; even one click for labeling surpasses known state-of-the-art TTA methods by 2.6% average mIoU on the ACDC benchmark. Empirical results imply that progress in either the model adapter or the label annotator will improve the ATASeg framework, giving it large research and practical potential.
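The abstract describes an online loop in which a label annotator queries a tiny budget of human labels per test sample and a model adapter updates the model from those labels on the fly. The following is a minimal, hypothetical sketch of that loop, not the authors' implementation: the "image" is a list of per-pixel feature vectors, the annotator picks the least-confident pixels (an assumed uncertainty-based query strategy), and the adapter applies a perceptron-style online update.

```python
# Hypothetical sketch of an active test-time adaptation loop in the spirit of
# ATASeg. All names (predict, select_queries, adapt) and the toy linear model
# are illustrative assumptions, not the paper's actual components.

def predict(weights, pixel):
    # Toy per-pixel score: dot product of weights and pixel features.
    return sum(w * x for w, x in zip(weights, pixel))

def select_queries(weights, image, budget):
    # Label annotator: pick the `budget` pixels closest to the decision
    # boundary (smallest |score|), i.e. the least confident predictions.
    ranked = sorted(range(len(image)),
                    key=lambda i: abs(predict(weights, image[i])))
    return ranked[:budget]

def adapt(weights, image, oracle, budget, lr=0.1):
    # Model adapter: one online update per queried pixel, using the
    # human-provided label (+1 / -1) returned by `oracle`.
    for i in select_queries(weights, image, budget):
        y = oracle(i)                       # human "click" gives the label
        if y * predict(weights, image[i]) <= 0:   # wrong or uncertain
            weights = [w + lr * y * x
                       for w, x in zip(weights, image[i])]
    return weights
```

In an actual segmentation setting the model would be a deep network updated by gradient descent on the queried pixels' cross-entropy loss; the sketch only conveys the query-then-update structure shared by the model adapter and label annotator.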

Authors (4)
  1. Longhui Yuan (8 papers)
  2. Shuang Li (203 papers)
  3. Zhuo He (15 papers)
  4. Binhui Xie (19 papers)
Citations (1)
