SCA-Net: A Self-Correcting Two-Layer Autoencoder for Hyper-spectral Unmixing (2102.05713v5)

Published 10 Feb 2021 in cs.LG

Abstract: Hyperspectral unmixing involves separating a pixel as a weighted combination of its constituent endmembers and corresponding fractional abundances, with the current state-of-the-art results achieved by neural models on benchmark datasets. However, these networks are severely over-parameterized and consequently, the invariant endmember spectra extracted as decoder weights have a high variance over multiple runs. These approaches perform substantial post-processing while requiring an exact specification of the number of endmembers and specialized initialization of weights from other algorithms like VCA. We show for the first time that a two-layer autoencoder (SCA), with $2FK$ parameters ($F$ features, $K$ endmembers), achieves error metrics that are scales apart ($10^{-5}$) from previously reported values ($10^{-2}$). SCA converges to this low-error solution starting from a random initialization of weights. We also show that SCA, based upon a bi-orthogonal representation, performs a self-correction when the number of endmembers is over-specified. Numerical experiments on the Samson, Jasper, and Urban datasets demonstrate that SCA outperforms previously reported error metrics in all cases while being robust to noise and outliers.
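The architecture the abstract describes is compact enough to sketch directly. Below is a minimal, hypothetical PyTorch rendering of a two-layer linear autoencoder with exactly $2FK$ weights (no biases): the encoder maps an $F$-band pixel to $K$ abundances, and the linear decoder's weight matrix plays the role of the endmember spectra. The softmax abundance constraint, the MSE loss, and all names here are illustrative assumptions rather than the paper's exact formulation, which additionally relies on a bi-orthogonal representation and a self-correction mechanism.

```python
import torch
import torch.nn as nn

class TwoLayerUnmixingAE(nn.Module):
    """Hypothetical sketch of a 2*F*K-parameter unmixing autoencoder.

    F spectral bands, K endmembers. The encoder maps a pixel to K
    fractional abundances; the linear decoder's weight matrix holds
    the K endmember spectra as its columns.
    """

    def __init__(self, num_bands: int, num_endmembers: int):
        super().__init__()
        # Encoder: F -> K (F*K weights, no bias)
        self.encoder = nn.Linear(num_bands, num_endmembers, bias=False)
        # Decoder: K -> F (K*F weights, no bias); weight shape (F, K),
        # so each column is one endmember spectrum
        self.decoder = nn.Linear(num_endmembers, num_bands, bias=False)

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        # Softmax enforces nonnegativity and sum-to-one on abundances;
        # this is one common unmixing convention, assumed here
        abundances = torch.softmax(self.encoder(x), dim=-1)
        return self.decoder(abundances)

# Usage sketch on random data, starting from PyTorch's default random
# initialization (the abstract stresses no VCA-based initialization):
F, K = 156, 3                     # e.g. Samson: 156 bands, 3 endmembers
model = TwoLayerUnmixingAE(F, K)
pixels = torch.rand(1024, F)      # batch of pixels
recon = model(pixels)
loss = nn.functional.mse_loss(recon, pixels)
endmembers = model.decoder.weight.detach()  # shape (F, K)
```

After training, the decoder weight matrix is read off directly as the endmember estimates; the abstract's claim is that this small parameterization, trained from random weights, reaches errors around $10^{-5}$ where prior over-parameterized networks report around $10^{-2}$.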
