
Feature-Augmented Deep Networks for Multiscale Building Segmentation in High-Resolution UAV and Satellite Imagery (2505.05321v1)

Published 8 May 2025 in cs.CV and cs.AI

Abstract: Accurate building segmentation from high-resolution RGB imagery remains challenging due to spectral similarity with non-building features, shadows, and irregular building geometries. In this study, we present a comprehensive deep learning framework for multiscale building segmentation using RGB aerial and satellite imagery with spatial resolutions ranging from 0.4m to 2.7m. We curate a diverse, multi-sensor dataset and introduce feature-augmented inputs by deriving secondary representations including Principal Component Analysis (PCA), Visible Difference Vegetation Index (VDVI), Morphological Building Index (MBI), and Sobel edge filters from RGB channels. These features guide a Res-U-Net architecture in learning complex spatial patterns more effectively. We also propose training policies incorporating layer freezing, cyclical learning rates, and SuperConvergence to reduce training time and resource usage. Evaluated on a held-out WorldView-3 image, our model achieves an overall accuracy of 96.5%, an F1-score of 0.86, and an Intersection over Union (IoU) of 0.80, outperforming existing RGB-based benchmarks. This study demonstrates the effectiveness of combining multi-resolution imagery, feature augmentation, and optimized training strategies for robust building segmentation in remote sensing applications.

Summary

  • The paper proposes a feature-augmented deep network architecture, combining diverse derived features with RGB data, to improve the accuracy of multiscale building segmentation in high-resolution satellite and UAV imagery.
  • The proposed model achieved strong performance, including an overall accuracy of 96.5%, an F1-score of 0.86, and an IoU of 0.80 on a WorldView-3 dataset, outperforming RGB-only methods.
  • This method has significant implications for urban monitoring and mapping, offering a robust technique for building extraction applicable to various environmental and urban planning contexts using GIS and remote sensing.

Overview of Feature-Augmented Deep Networks for Multiscale Building Segmentation

The paper explores the development and efficacy of a deep learning framework to address the persistent challenge of accurately segmenting buildings in high-resolution UAV and satellite imagery. Buildings often share spectral similarities with non-building features, complicating segmentation efforts. To overcome these challenges, the authors propose a comprehensive method integrating feature augmentation into a Res-U-Net architecture, optimizing the model for multiscale building segmentation across spatial resolutions ranging from 0.4m to 2.7m.

The authors employ a carefully curated multi-sensor dataset and augment the RGB input data with derivatives such as Principal Component Analysis (PCA), Visible Difference Vegetation Index (VDVI), Morphological Building Index (MBI), and Sobel edge filters. The overall aim is to enhance the model's ability to learn complex spatial patterns more effectively.
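As an illustration of this kind of feature augmentation, the sketch below derives a VDVI channel and a Sobel edge-magnitude channel from an RGB array and stacks them with the original bands. This is a minimal NumPy reconstruction, not the authors' code; the function names and the five-channel stacking (the paper also uses PCA and MBI channels, omitted here) are assumptions for illustration.

```python
import numpy as np

def vdvi(rgb):
    """Visible Difference Vegetation Index from an (H, W, 3) RGB array:
    VDVI = (2G - R - B) / (2G + R + B). Values near 1 suggest vegetation."""
    r, g, b = (rgb[..., i].astype(float) for i in range(3))
    return (2 * g - r - b) / (2 * g + r + b + 1e-8)  # epsilon avoids 0/0

def sobel_magnitude(gray):
    """Gradient magnitude from horizontal and vertical 3x3 Sobel kernels."""
    kx = np.array([[-1, 0, 1], [-2, 0, 2], [-1, 0, 1]], dtype=float)
    ky = kx.T
    pad = np.pad(gray.astype(float), 1, mode="edge")
    h, w = gray.shape
    gx = np.zeros((h, w))
    gy = np.zeros((h, w))
    for i in range(h):
        for j in range(w):
            win = pad[i:i + 3, j:j + 3]   # 3x3 neighborhood around (i, j)
            gx[i, j] = (win * kx).sum()
            gy[i, j] = (win * ky).sum()
    return np.hypot(gx, gy)

def augment(rgb):
    """Stack RGB with derived feature channels into one (H, W, 5) input."""
    gray = rgb.astype(float).mean(axis=-1)
    return np.concatenate(
        [rgb.astype(float), vdvi(rgb)[..., None], sobel_magnitude(gray)[..., None]],
        axis=-1,
    )
```

The augmented array can then be fed to the network in place of the raw three-band input, letting early layers see vegetation and edge cues directly instead of having to learn them from scratch.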

Numerical Results

The proposed model performs strongly, achieving an overall accuracy of 96.5%, an F1-score of 0.86, and an Intersection over Union (IoU) score of 0.80 when tested on a held-out WorldView-3 image. These results outperform existing benchmarks in RGB-based building segmentation, illustrating the potential of feature augmentation for improving segmentation outcomes.
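For reference, the three reported metrics are standard pixel-wise quantities computed from the binary prediction and ground-truth masks. The helper below is a hypothetical sketch of those definitions, not code from the paper:

```python
import numpy as np

def segmentation_metrics(pred, truth):
    """Pixel-wise overall accuracy, F1-score, and IoU for binary masks."""
    pred, truth = pred.astype(bool), truth.astype(bool)
    tp = np.sum(pred & truth)    # building pixels correctly detected
    fp = np.sum(pred & ~truth)   # false alarms
    fn = np.sum(~pred & truth)   # missed building pixels
    tn = np.sum(~pred & ~truth)  # background correctly rejected
    accuracy = (tp + tn) / pred.size
    f1 = 2 * tp / (2 * tp + fp + fn)
    iou = tp / (tp + fp + fn)
    return accuracy, f1, iou
```

Note that overall accuracy counts background pixels, so it can be high even when building delineation is mediocre; F1 and IoU, which ignore true negatives, are the more demanding indicators here.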

Methodological Contributions

The paper makes several key contributions to building segmentation:

  1. Multiscale and Multi-sensor Dataset: It combines diverse imagery sources to address variability in building types and environments, ensuring a more generalized segmentation model.
  2. Feature Augmentation: The approach leverages various derived feature inputs alongside standard RGB data, demonstrating improved model performance, particularly in complex segmentation scenarios.
  3. Optimized Training Strategy: Techniques such as layer freezing, cyclical learning rates, and SuperConvergence are implemented to reduce training time and resource usage while maintaining or enhancing model accuracy.
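The learning-rate policies named in point 3 can be sketched in a few lines. Below are minimal implementations of a triangular cyclical schedule (Smith's CLR) and the one-cycle schedule behind SuperConvergence, which ramps the learning rate up to a large peak and back down over a single cycle. The function names and the `div=25` warmup divisor are illustrative assumptions, not the paper's hyperparameters:

```python
def triangular_clr(step, base_lr, max_lr, step_size):
    """Triangular cyclical learning rate: the LR ramps linearly between
    base_lr and max_lr over each half-cycle of step_size iterations."""
    cycle = step // (2 * step_size)
    x = abs(step / step_size - 2 * cycle - 1)  # 1 at cycle edges, 0 at peak
    return base_lr + (max_lr - base_lr) * (1 - x)

def one_cycle(step, total_steps, max_lr, div=25.0):
    """One-cycle (SuperConvergence-style) schedule: warm up from max_lr/div
    to max_lr over the first half of training, then anneal back down."""
    base_lr = max_lr / div
    half = total_steps / 2
    if step <= half:
        return base_lr + (max_lr - base_lr) * (step / half)
    return max_lr - (max_lr - base_lr) * ((step - half) / half)
```

Deep learning frameworks ship equivalent schedulers (e.g. PyTorch's `OneCycleLR`); combined with freezing the pretrained encoder layers early in training, such schedules are what allow the authors to cut training time without sacrificing accuracy.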

Implications and Future Developments

The research implications are substantial for urban monitoring and mapping, offering a robust method for building extraction that can be applied to various environmental and urban planning contexts. The fusion of multi-resolution imagery with advanced machine learning techniques could streamline urban analysis in geographic information systems (GIS) and remote sensing applications.

Looking forward, the approach points to continued enhancements in segmentation quality through further integration of auxiliary data sources and more sophisticated feature engineering. Moreover, future work could explore the application of similar architectures to other segmentation tasks beyond buildings, widening the scope of machine learning applications in remote sensing.

The paper represents a significant step toward more accurate and efficient automated detection and analysis of urban features, bolstering capabilities in remote sensing and urban planning.