Transformer based Endmember Fusion with Spatial Context for Hyperspectral Unmixing

Published 6 Feb 2024 in eess.IV (arXiv:2402.03835v3)

Abstract: In recent years, transformer-based deep learning networks have gained popularity in hyperspectral (HS) unmixing due to their strong performance. The attention mechanism within transformers enables input-dependent weighting and enhances contextual awareness during training. Drawing inspiration from this, we propose a novel attention-based hyperspectral unmixing algorithm, Transformer-based Endmember Fusion with Spatial Context for Hyperspectral Unmixing (FusionNet). The network leverages an ensemble of endmembers for initial guidance, avoiding the suboptimal results that many algorithms encounter because of their dependence on a single initialization. FusionNet also incorporates a Pixel Contextualizer (PC), which introduces contextual awareness into abundance prediction by considering neighboring pixels. Unlike Convolutional Neural Networks (CNNs) and conventional transformer-based approaches, which are constrained to specific kernel or window shapes, FusionNet can use any arbitrary neighborhood configuration. We compared FusionNet against eight state-of-the-art algorithms on three widely used real datasets and one synthetic dataset; the results show that FusionNet achieves competitive performance.
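The core idea of the Pixel Contextualizer, as described in the abstract, is to weight an arbitrary set of neighborhood pixels by attention rather than a fixed kernel or window. A minimal sketch of that idea is below; the function names and the use of plain scaled dot-product attention are illustrative assumptions, not the paper's actual PC architecture.

```python
import numpy as np

def softmax(x):
    # Numerically stable softmax over a 1-D score vector.
    e = np.exp(x - x.max())
    return e / e.sum()

def contextualize_pixel(center, neighbors):
    """Aggregate an arbitrary set of neighborhood spectra into a context
    vector for the center pixel via scaled dot-product attention.
    Illustrative sketch only, not the paper's exact PC module.

    center:    (d,)  spectrum of the pixel being unmixed
    neighbors: (n, d) spectra of any chosen neighborhood configuration
    """
    d = center.shape[0]
    scores = neighbors @ center / np.sqrt(d)  # similarity of each neighbor to center
    weights = softmax(scores)                 # input-dependent attention weights
    return weights @ neighbors                # (d,) convex combination of neighbors

# Example: a 5-band pixel with 4 arbitrarily chosen neighbors
# (the neighborhood need not be a square window).
rng = np.random.default_rng(0)
center = rng.random(5)
neighbors = rng.random((4, 5))
context = contextualize_pixel(center, neighbors)
```

Because the attention weights depend on the input spectra themselves, the neighborhood can be any set of pixels, which is the flexibility the abstract contrasts with fixed CNN kernels or transformer windows.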
