Transformer based Endmember Fusion with Spatial Context for Hyperspectral Unmixing
Abstract: In recent years, transformer-based deep learning networks have gained popularity in hyperspectral (HS) unmixing applications due to their superior performance. The attention mechanism within transformers facilitates input-dependent weighting and enhances contextual awareness during training. Drawing inspiration from this, we propose a novel attention-based HS unmixing algorithm, Transformer-based Endmember Fusion with Spatial Context for Hyperspectral Unmixing (FusionNet). The network leverages an ensemble of endmembers for initial guidance, avoiding the suboptimal results that many algorithms encounter due to their dependence on a single initialization. FusionNet incorporates a Pixel Contextualizer (PC), which introduces contextual awareness into abundance prediction by considering neighboring pixels. Unlike Convolutional Neural Networks (CNNs) and conventional transformer-based approaches, which are constrained by specific kernel or window shapes, FusionNet can use any arbitrary neighborhood configuration. We conducted a comparative analysis between FusionNet and eight state-of-the-art algorithms on three widely recognized real datasets and one synthetic dataset. The results demonstrate that FusionNet offers competitive performance compared to the other algorithms.
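The Pixel Contextualizer idea described above — a center pixel attending over an arbitrary set of neighborhood pixels rather than a fixed CNN kernel or transformer window — can be sketched as a single-head cross-attention step. This is an illustrative sketch only; the function name, dimensions, and weight matrices are assumptions, not the paper's actual implementation.

```python
import numpy as np

def pixel_contextualizer(center, neighbors, Wq, Wk, Wv):
    """Cross-attention of one pixel over an arbitrary neighbor set.

    center:    (bands,) spectrum of the pixel being unmixed
    neighbors: (n, bands) spectra of ANY chosen neighborhood pixels
               (irregular shapes allowed, unlike a fixed kernel)
    Wq/Wk/Wv:  (bands, d) illustrative projection matrices
    """
    q = center @ Wq                        # query, shape (d,)
    K = neighbors @ Wk                     # keys,  shape (n, d)
    V = neighbors @ Wv                     # values, shape (n, d)
    scores = K @ q / np.sqrt(q.shape[0])   # scaled dot-product scores, (n,)
    weights = np.exp(scores - scores.max())
    weights /= weights.sum()               # softmax attention weights
    return weights @ V                     # context vector, shape (d,)

rng = np.random.default_rng(0)
bands, d = 8, 4
Wq, Wk, Wv = (rng.normal(size=(bands, d)) for _ in range(3))
center = rng.normal(size=bands)
# Arbitrary neighborhood: here 5 pixels, but any configuration works
neighbors = rng.normal(size=(5, bands))
ctx = pixel_contextualizer(center, neighbors, Wq, Wk, Wv)
print(ctx.shape)  # (4,)
```

Because the neighbor set is just a list of pixel spectra, the same routine handles square windows, crosses, or irregular neighborhoods without changing the code — the flexibility the abstract attributes to the PC module.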