
Hyperspectral Unmixing Network Inspired by Unfolding an Optimization Problem (2005.10856v2)

Published 21 May 2020 in eess.IV and cs.LG

Abstract: The hyperspectral image (HSI) unmixing task is essentially an inverse problem, commonly solved by optimization algorithms under a predefined (non-)linear mixture model. Although these optimization algorithms show impressive performance, they are computationally demanding because they often rely on iterative updating schemes. Recently, the rise of neural networks has inspired many learning-based algorithms in the unmixing literature; however, most of them lack interpretability and require large training datasets. A natural question then arises: can one combine the model-based and learning-based approaches to obtain an interpretable and fast algorithm for the HSI unmixing problem? In this paper, we propose two novel network architectures, named U-ADMM-AENet and U-ADMM-BUNet, for abundance estimation and blind unmixing respectively, by combining the conventional optimization-model-based unmixing method with the emerging learning-based unmixing method. We first consider a linear mixture model with a sparsity constraint, then unfold the Alternating Direction Method of Multipliers (ADMM) algorithm to construct the unmixing network structures. We also show that the unfolded structures have corresponding interpretations in the machine learning literature, which further demonstrates the effectiveness of the proposed methods. Benefiting from this interpretation, the proposed networks can be initialized by incorporating prior information about the HSI data. Unlike traditional unfolding networks, we propose a new training strategy so that the proposed networks better fit HSI applications. Extensive experiments show that the proposed methods achieve much faster convergence and competitive performance, even with very small training datasets, compared with state-of-the-art algorithms.
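
To make the unfolding idea concrete, below is a minimal sketch of unrolling ADMM iterations into network layers for sparse, nonnegative abundance estimation under a linear mixture model y ≈ Ea (E is the endmember matrix, a the abundances). The class names, layer count, and PyTorch framing are illustrative assumptions for a generic ADMM-unfolding network, not the paper's exact U-ADMM-AENet or U-ADMM-BUNet architectures.

```python
import torch
import torch.nn as nn

class UnfoldedADMMLayer(nn.Module):
    """One unfolded ADMM iteration for sparse abundance estimation.

    Learnable version of the classic ADMM updates for
        min_a 0.5 * ||y - E a||^2 + lam * ||a||_1,  subject to a >= 0.
    Hypothetical sketch -- not the authors' exact layer design.
    """
    def __init__(self, n_bands, n_endmembers):
        super().__init__()
        # W1 plays the role of (E^T E + rho I)^{-1} E^T
        self.W1 = nn.Linear(n_bands, n_endmembers, bias=False)
        # W2 plays the role of rho * (E^T E + rho I)^{-1}
        self.W2 = nn.Linear(n_endmembers, n_endmembers, bias=False)
        # learnable shrinkage threshold, corresponds to lam / rho
        self.theta = nn.Parameter(torch.tensor(0.01))

    def forward(self, y, z, u):
        a = self.W1(y) + self.W2(z - u)     # a-update (least-squares step)
        z = torch.relu(a + u - self.theta)  # z-update: nonnegative soft-threshold
        u = u + a - z                       # dual-variable update
        return a, z, u

class UnfoldedADMMNet(nn.Module):
    """K stacked layers, each mimicking one ADMM solver iteration."""
    def __init__(self, n_bands, n_endmembers, n_layers=5):
        super().__init__()
        self.layers = nn.ModuleList(
            UnfoldedADMMLayer(n_bands, n_endmembers) for _ in range(n_layers)
        )
        self.n_endmembers = n_endmembers

    def forward(self, y):
        z = y.new_zeros(y.shape[0], self.n_endmembers)
        u = torch.zeros_like(z)
        for layer in self.layers:
            a, z, u = layer(y, z, u)
        return z  # estimated abundances (nonnegative, sparse)

# Usage: 200 spectral bands, 5 endmembers, a batch of 16 pixels
net = UnfoldedADMMNet(n_bands=200, n_endmembers=5)
abundances = net(torch.rand(16, 200))
```

Because each learnable weight corresponds to a quantity in the ADMM updates (e.g., W1 to (E^T E + ρI)^{-1}E^T), the layers can be initialized from a known or estimated endmember matrix E, which is one way the prior-information initialization described in the abstract could be realized.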
