Effectiveness of topology-invariant encoding with SSM backbones

Determine whether LUNA’s topology-invariant learned-query cross-attention for channel unification remains effective when integrated with efficient state-space model backbones such as bidirectional Mamba for EEG sequence modeling across heterogeneous electrode configurations.

Background

EEG datasets vary widely in electrode counts and placements, which degrades performance when models are transferred across montages. LUNA addresses this by using learned queries and cross-attention to project an arbitrary set of channels into a fixed-size latent space, but prior work has paired this mechanism with Transformer backbones. Mamba-based state-space models offer linear-time temporal modeling and improved efficiency, motivating the question of whether LUNA’s topology-invariant channel unification remains effective when paired with SSM backbones instead of Transformers.
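
To make the mechanism concrete, below is a minimal sketch of learned-query cross-attention for channel unification. The module name, hidden sizes, and the toy per-channel embedding are illustrative assumptions rather than details taken from LUNA; only the core idea is kept: a fixed set of learned queries cross-attends over a variable number of channel embeddings, so the output shape does not depend on the montage.

```python
# Minimal PyTorch sketch of topology-invariant channel unification.
# Names, sizes, and the per-channel embedding are illustrative placeholders.
import torch
import torch.nn as nn

class ChannelUnifier(nn.Module):
    """Map a variable number of EEG channels to a fixed set of latent tokens."""
    def __init__(self, d_model: int = 128, n_queries: int = 8, n_heads: int = 4):
        super().__init__()
        # Fixed, learned queries: their count does not depend on the montage.
        self.queries = nn.Parameter(torch.randn(n_queries, d_model))
        self.cross_attn = nn.MultiheadAttention(d_model, n_heads, batch_first=True)
        self.channel_proj = nn.Linear(1, d_model)  # toy per-sample channel embedding

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        # x: (batch, channels, time) with an arbitrary channel count per dataset.
        b, c, t = x.shape
        # Embed each channel value at each time step (placeholder encoder).
        keys = self.channel_proj(x.permute(0, 2, 1).reshape(b * t, c, 1))  # (b*t, c, d)
        q = self.queries.unsqueeze(0).expand(b * t, -1, -1)                # (b*t, q, d)
        latent, _ = self.cross_attn(q, keys, keys)                         # (b*t, q, d)
        # Fixed-size output: (batch, time, n_queries, d_model), independent of c.
        return latent.reshape(b, t, *latent.shape[1:])

# Two montages with different channel counts map to the same latent shape.
unifier = ChannelUnifier()
print(unifier(torch.randn(2, 19, 50)).shape)  # torch.Size([2, 50, 8, 128])
print(unifier(torch.randn(2, 64, 50)).shape)  # torch.Size([2, 50, 8, 128])
```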

LuMamba combines LUNA’s channel unification with FEMBA’s bidirectional Mamba blocks to study this question empirically, but the paper explicitly states that the general question remains open.
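
As a rough illustration of how the unified latents could be paired with an SSM-style temporal backbone, the sketch below runs a diagonal linear state-space recurrence in both time directions. This stand-in omits the input-dependent gating and convolutions of actual Mamba blocks, and all names and sizes here are assumptions for illustration, not details of FEMBA or LuMamba.

```python
# Hedged sketch: a bidirectional SSM-style temporal mixer over unified latents.
# A simplified stand-in for bidirectional Mamba blocks, not their implementation.
import torch
import torch.nn as nn

class BiSSMBlock(nn.Module):
    """Diagonal linear state-space recurrence run in both time directions."""
    def __init__(self, d_model: int = 128):
        super().__init__()
        # Per-feature decay in (0, 1) plus input/output projections (placeholders).
        self.decay_logit = nn.Parameter(torch.zeros(d_model))
        self.in_proj = nn.Linear(d_model, d_model)
        self.out_proj = nn.Linear(2 * d_model, d_model)

    def _scan(self, u: torch.Tensor) -> torch.Tensor:
        # u: (batch, time, d_model); h_t = a * h_{t-1} + u_t with diagonal a.
        a = torch.sigmoid(self.decay_logit)
        h = torch.zeros_like(u[:, 0])
        out = []
        for t in range(u.shape[1]):  # O(T) sequential scan, no attention
            h = a * h + u[:, t]
            out.append(h)
        return torch.stack(out, dim=1)

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        u = self.in_proj(x)
        fwd = self._scan(u)                  # left-to-right pass
        bwd = self._scan(u.flip(1)).flip(1)  # right-to-left pass
        return x + self.out_proj(torch.cat([fwd, bwd], dim=-1))  # residual mix

# Example: latent tokens (e.g. flattened over queries) mixed along the time axis
# by a shape-preserving bidirectional block.
block = BiSSMBlock(d_model=128)
tokens = torch.randn(2, 50, 128)   # (batch, time, d_model)
print(block(tokens).shape)         # torch.Size([2, 50, 128])
```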

References

Whether such topology-invariant encoding remains effective when combined with efficient SSM backbones is an open question.

LuMamba: Latent Unified Mamba for Electrode Topology-Invariant and Efficient EEG Modeling (2603.19100 - Broustail et al., 19 Mar 2026) in Section 1 (Introduction)