
An Attention-Fused Network for Semantic Segmentation of Very-High-Resolution Remote Sensing Imagery (2105.04132v2)

Published 10 May 2021 in cs.CV

Abstract: Semantic segmentation is an essential task in deep learning and, with the growth of remote sensing big data, it is increasingly applied to remote sensing imagery. Deep convolutional neural networks (DCNNs) face the challenge of feature fusion: fusing multisource data from very-high-resolution remote sensing images increases the information available to the network, which helps DCNNs classify target objects correctly, while fusing high-level abstract features with low-level spatial features improves classification accuracy at the boundaries between target objects. In this paper, we propose a multipath encoder structure to extract features from multipath inputs, a multipath attention-fused block module to fuse the multipath features, and a refinement attention-fused block module to fuse high-level abstract features with low-level spatial features. On this basis, we propose a novel convolutional neural network architecture, named the attention-fused network (AFNet). AFNet achieves state-of-the-art performance, with an overall accuracy of 91.7% and a mean F1 score of 90.96% on the ISPRS Vaihingen 2D dataset, and an overall accuracy of 92.1% and a mean F1 score of 93.44% on the ISPRS Potsdam 2D dataset.
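
The abstract gives no implementation details, so the following is only a minimal PyTorch-style sketch of what an attention-based fusion block of the kind described above could look like (channel attention re-weighting one feature path before merging it with another). All module and parameter names here are hypothetical illustrations, not the authors' actual AFNet code.

```python
# Hypothetical sketch of an attention-style fusion block (not the authors' code).
# It re-weights one feature path with channel attention derived from both paths,
# then merges them -- the general idea of fusing multipath or multi-level features.
import torch
import torch.nn as nn


class AttentionFusedBlock(nn.Module):
    def __init__(self, channels: int, reduction: int = 16):
        super().__init__()
        # Squeeze-and-excitation-style channel attention over the concatenated paths.
        self.attention = nn.Sequential(
            nn.AdaptiveAvgPool2d(1),                       # global spatial context
            nn.Conv2d(2 * channels, channels // reduction, kernel_size=1),
            nn.ReLU(inplace=True),
            nn.Conv2d(channels // reduction, channels, kernel_size=1),
            nn.Sigmoid(),                                  # per-channel weights in [0, 1]
        )
        self.merge = nn.Sequential(
            nn.Conv2d(2 * channels, channels, kernel_size=3, padding=1, bias=False),
            nn.BatchNorm2d(channels),
            nn.ReLU(inplace=True),
        )

    def forward(self, primary: torch.Tensor, auxiliary: torch.Tensor) -> torch.Tensor:
        # primary / auxiliary: (N, C, H, W) feature maps from two encoder paths
        # (or a high-level map upsampled to match a low-level one).
        joint = torch.cat([primary, auxiliary], dim=1)
        weights = self.attention(joint)                    # (N, C, 1, 1)
        fused = torch.cat([primary, auxiliary * weights], dim=1)
        return self.merge(fused)


if __name__ == "__main__":
    block = AttentionFusedBlock(channels=64)
    a = torch.randn(2, 64, 128, 128)  # e.g. features from the spectral image path
    b = torch.randn(2, 64, 128, 128)  # e.g. features from an elevation (DSM) path
    print(block(a, b).shape)          # torch.Size([2, 64, 128, 128])
```

In the paper, such fusion is applied both across encoder paths (multisource inputs) and across decoder levels (high-level vs. low-level features); refer to the paper for the exact block designs.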

Authors (8)
  1. Xuan Yang (49 papers)
  2. Shanshan Li (54 papers)
  3. Zhengchao Chen (5 papers)
  4. Jocelyn Chanussot (89 papers)
  5. Xiuping Jia (16 papers)
  6. Bing Zhang (435 papers)
  7. Baipeng Li (3 papers)
  8. Pan Chen (22 papers)
Citations (110)
