
Learning from Unlabelled Data with Transformers: Domain Adaptation for Semantic Segmentation of High Resolution Aerial Images (2404.11299v1)

Published 17 Apr 2024 in cs.CV and cs.LG

Abstract: Data from satellites or aerial vehicles are most of the time unlabelled. Annotating such data accurately is difficult, requires expertise, and is costly in terms of time. Even when Earth Observation (EO) data are correctly labelled, the labels may change over time. Learning from unlabelled data within a semi-supervised learning framework for the segmentation of aerial images is challenging. In this paper, we develop a new model for semantic segmentation of unlabelled images, the Non-annotated Earth Observation Semantic Segmentation (NEOS) model. NEOS performs domain adaptation, as the target domain lacks ground-truth semantic segmentation masks. The distribution inconsistencies between the target and source domains stem from differences in acquisition scenes, environmental conditions, sensors, and acquisition times. Our model aligns the learned representations of the different domains so that they coincide. Evaluation results show that NEOS is successful and outperforms other models for semantic segmentation of unlabelled data.
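
The core mechanism the abstract describes — training a segmenter on labelled source imagery while aligning source and target feature distributions, since the target domain has no masks — can be illustrated with a short sketch. The snippet below is a minimal, hypothetical PyTorch example using a gradient-reversal domain classifier in the spirit of domain-adversarial training; it is not the authors' NEOS implementation, and the toy encoder, heads, loss weighting, and tensor shapes are all assumptions made for illustration.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F


class GradientReversal(torch.autograd.Function):
    """Identity on the forward pass; flips and scales gradients on backward."""
    @staticmethod
    def forward(ctx, x, lam):
        ctx.lam = lam
        return x.view_as(x)

    @staticmethod
    def backward(ctx, grad_out):
        return -ctx.lam * grad_out, None


class ToySegmenter(nn.Module):
    """Hypothetical shared encoder + segmentation head + domain classifier."""
    def __init__(self, num_classes=6):
        super().__init__()
        self.encoder = nn.Sequential(
            nn.Conv2d(3, 32, 3, stride=2, padding=1), nn.ReLU(),
            nn.Conv2d(32, 64, 3, stride=2, padding=1), nn.ReLU(),
        )
        self.seg_head = nn.Conv2d(64, num_classes, 1)
        self.dom_head = nn.Sequential(
            nn.AdaptiveAvgPool2d(1), nn.Flatten(), nn.Linear(64, 2),
        )

    def forward(self, x, lam=1.0):
        feats = self.encoder(x)
        # Upsample per-pixel class logits back to the input resolution.
        seg = F.interpolate(self.seg_head(feats), size=x.shape[-2:],
                            mode="bilinear", align_corners=False)
        # Domain classifier sees gradient-reversed features, so the encoder
        # is pushed to make the two domains indistinguishable.
        dom = self.dom_head(GradientReversal.apply(feats, lam))
        return seg, dom


model = ToySegmenter()
opt = torch.optim.Adam(model.parameters(), lr=1e-4)

# Dummy batch: labelled source tiles, unlabelled target tiles.
src_img = torch.randn(2, 3, 128, 128)
src_mask = torch.randint(0, 6, (2, 128, 128))
tgt_img = torch.randn(2, 3, 128, 128)

seg_logits, src_dom = model(src_img)
_, tgt_dom = model(tgt_img)

# The supervised loss uses source masks only; the domain loss uses both
# domains, so unlabelled target data still shapes the shared encoder.
seg_loss = F.cross_entropy(seg_logits, src_mask)
dom_loss = (F.cross_entropy(src_dom, torch.zeros(2, dtype=torch.long))
            + F.cross_entropy(tgt_dom, torch.ones(2, dtype=torch.long)))

opt.zero_grad()
(seg_loss + dom_loss).backward()
opt.step()
```

The gradient-reversal trick is only one way to make two feature distributions coincide; the paper's actual alignment objective may differ from this sketch.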

Authors (5)
  1. Nikolaos Dionelis (16 papers)
  2. Francesco Pro (3 papers)
  3. Luca Maiano (10 papers)
  4. Irene Amerini (22 papers)
  5. Bertrand Le Saux (59 papers)
Citations (1)
