Classification of Hyperspectral and LiDAR Data Using Coupled CNNs (2002.01144v1)

Published 4 Feb 2020 in cs.CV and eess.IV

Abstract: In this paper, we propose an efficient and effective framework to fuse hyperspectral and Light Detection And Ranging (LiDAR) data using two coupled convolutional neural networks (CNNs). One CNN is designed to learn spectral-spatial features from hyperspectral data, and the other one is used to capture the elevation information from LiDAR data. Both of them consist of three convolutional layers, and the last two convolutional layers are coupled together via a parameter sharing strategy. In the fusion phase, feature-level and decision-level fusion methods are simultaneously used to integrate these heterogeneous features sufficiently. For the feature-level fusion, three different fusion strategies are evaluated, including the concatenation strategy, the maximization strategy, and the summation strategy. For the decision-level fusion, a weighted summation strategy is adopted, where the weights are determined by the classification accuracy of each output. The proposed model is evaluated on an urban data set acquired over Houston, USA, and a rural one captured over Trento, Italy. On the Houston data, our model can achieve a new record overall accuracy of 96.03%. On the Trento data, it achieves an overall accuracy of 99.12%. These results sufficiently certify the effectiveness of our proposed model.

Citations (225)

Summary

  • The paper introduces a novel coupled CNN framework that fuses hyperspectral and LiDAR data for enhanced feature extraction.
  • It employs strategic parameter sharing between two CNNs via coupled convolutional layers to reduce complexity.
  • Experiments reached overall accuracies of 96.03% and 99.12% on the Houston and Trento datasets, outperforming previous methods.

Classification of Hyperspectral and LiDAR Data Using Coupled CNNs

The paper "Classification of Hyperspectral and LiDAR Data Using Coupled CNNs" presents a framework for efficiently fusing hyperspectral and LiDAR data via coupled convolutional neural networks (CNNs). This research addresses two primary challenges in remote sensing: effective data fusion and limited training samples.

Methodology and Framework

The framework consists of two coupled CNNs. The first extracts spectral-spatial features from hyperspectral data through three convolutional layers, while the second, with an identical architecture, captures elevation information from LiDAR data. Coupling the last two convolutional layers of the two networks enforces parameter sharing, which promotes inter-network learning and significantly reduces the number of trainable parameters. This sharing mechanism harmonizes the feature learning of the two networks, which in turn benefits the fusion phase.
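To make the coupling concrete, the sketch below builds two three-layer branches where layers 2 and 3 literally reuse the same weight arrays. This is a minimal illustration, not the authors' implementation: the layer widths, the 144-band input size, and the reduction of each "convolution" to a per-pixel 1x1 linear map are all assumptions chosen for brevity.

```python
import numpy as np

def conv1x1(x, w, b):
    # A 1x1 convolution is a per-pixel linear map followed by ReLU;
    # x: (pixels, in_ch), w: (in_ch, out_ch). Enough to show sharing.
    return np.maximum(x @ w + b, 0.0)

rng = np.random.default_rng(0)

# Branch-specific first layers: hyperspectral input has many bands,
# the LiDAR elevation input has a single channel (sizes illustrative).
w1_hsi, b1_hsi = rng.standard_normal((144, 32)), np.zeros(32)
w1_lid, b1_lid = rng.standard_normal((1, 32)), np.zeros(32)

# Coupled layers 2 and 3: the SAME weights serve both branches.
w2, b2 = rng.standard_normal((32, 64)), np.zeros(64)
w3, b3 = rng.standard_normal((64, 128)), np.zeros(128)

def hsi_branch(x):
    h = conv1x1(x, w1_hsi, b1_hsi)   # branch-specific
    h = conv1x1(h, w2, b2)           # shared with LiDAR branch
    return conv1x1(h, w3, b3)        # shared with LiDAR branch

def lidar_branch(x):
    h = conv1x1(x, w1_lid, b1_lid)   # branch-specific
    h = conv1x1(h, w2, b2)           # shared with HSI branch
    return conv1x1(h, w3, b3)        # shared with HSI branch

# A 7x7 patch flattened to 49 pixels for each modality.
hsi_feat = hsi_branch(rng.standard_normal((49, 144)))
lidar_feat = lidar_branch(rng.standard_normal((49, 1)))
print(hsi_feat.shape, lidar_feat.shape)  # both (49, 128)
```

Because the deeper layers are shared, the two branches are forced to map their heterogeneous inputs into a common feature space, which is what makes the later element-wise fusion strategies (maximization, summation) well defined.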

The fusion of features occurs at both feature-level and decision-level. Three fusion strategies—concatenation, maximization, and summation—were assessed for feature-level fusion, while decision-level fusion employed a weighted summation strategy based on classification accuracy from individual outputs.
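The four fusion operations above are simple to state in code. The following sketch assumes the two branches emit equal-width feature vectors and that decision-level weights are the (normalized) validation accuracies of the individual classifiers; the function names and the normalization step are illustrative assumptions, not taken from the paper.

```python
import numpy as np

def fuse_features(fa, fb, strategy="sum"):
    # Feature-level fusion of two equal-width feature vectors.
    if strategy == "concat":
        return np.concatenate([fa, fb], axis=-1)  # doubles the width
    if strategy == "max":
        return np.maximum(fa, fb)                 # element-wise maximum
    if strategy == "sum":
        return fa + fb                            # element-wise sum
    raise ValueError(f"unknown strategy: {strategy}")

def fuse_decisions(probs, accuracies):
    # Decision-level fusion: weighted summation of class-probability
    # vectors, with weights proportional to each output's accuracy.
    w = np.asarray(accuracies, dtype=float)
    w = w / w.sum()
    return sum(wi * p for wi, p in zip(w, probs))

fa, fb = np.array([0.2, 0.9]), np.array([0.5, 0.1])
print(fuse_features(fa, fb, "max"))           # [0.5 0.9]
print(fuse_features(fa, fb, "concat").shape)  # (4,)

p1, p2 = np.array([0.7, 0.3]), np.array([0.4, 0.6])
fused = fuse_decisions([p1, p2], accuracies=[0.96, 0.88])
```

Note that maximization and summation keep the fused feature the same width as each input, while concatenation doubles it; the weighted decision fusion still yields a valid probability vector because the weights sum to one.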

Experiments and Results

The proposed model was evaluated on an urban dataset from Houston, USA, and a rural dataset from Trento, Italy. The results demonstrate superior classification performance, with an overall accuracy of 96.03% on the Houston dataset and 99.12% on the Trento dataset, surpassing previously reported results for both benchmarks.

Analytical Perspectives

The coupling of the two CNNs lets each network benefit from the other's data source, optimizing feature extraction for each modality while propagating learned representations between networks. This not only reduces computational complexity but also exploits the complementary spectral, spatial, and elevation information for improved classification. By combining feature-level and decision-level fusion, the framework achieves a tighter integration of the two modalities than traditional single-stage approaches.

Implications and Future Directions

This research contributes to remote sensing by demonstrating the efficacy of shared neural architectures for multisensor data fusion and classification. Practically, the approach can be extended to applications such as land cover mapping, urban planning, and environmental monitoring. Future work could refine the architecture for larger-scale datasets and explore other deep learning paradigms, such as attention mechanisms or transformers, for richer feature representations.

Overall, this paper delineates an advanced, resource-efficient method for enhancing the accuracy of hyperspectral and LiDAR data classification, opening pathways to further academic inquiry and applied technological evolution in geospatial data analysis.