Effectiveness of topology-invariant encoding with SSM backbones
Determine whether LUNA’s topology-invariant learned-query cross-attention for channel unification remains effective when integrated with efficient state-space model backbones such as bidirectional Mamba for EEG sequence modeling across heterogeneous electrode configurations.
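To make the channel-unification idea concrete, the following is a minimal NumPy sketch (not LUNA's or LuMamba's actual implementation) of learned-query cross-attention: a fixed set of learned queries attends over per-channel embeddings, so montages with different electrode counts all map to the same fixed-size latent representation that a downstream SSM backbone could consume. All names, shapes, and the single-head attention form are illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)
d = 16           # embedding dimension (assumed)
num_latents = 4  # fixed number of learned queries (assumed)

# Learned queries; in a real model these are trainable parameters.
Q = rng.standard_normal((num_latents, d))

def softmax(x, axis=-1):
    e = np.exp(x - x.max(axis=axis, keepdims=True))
    return e / e.sum(axis=axis, keepdims=True)

def unify_channels(channel_embeddings):
    """Cross-attend the learned queries over per-channel embeddings.

    Input has shape (C, d) for a montage with C electrodes; the output
    shape (num_latents, d) is the same for every C, which is what makes
    the encoding topology-invariant.
    """
    K = V = channel_embeddings                      # (C, d)
    attn = softmax(Q @ K.T / np.sqrt(d), axis=-1)   # (num_latents, C)
    return attn @ V                                 # (num_latents, d)

# Two montages with different electrode counts yield identically
# shaped latents, ready for a shared sequence backbone.
out_19 = unify_channels(rng.standard_normal((19, d)))
out_64 = unify_channels(rng.standard_normal((64, d)))
print(out_19.shape, out_64.shape)  # (4, 16) (4, 16)
```

The open question above is then whether this unification step, which decouples the model from electrode layout, retains its effectiveness when the downstream attention backbone is replaced by an efficient SSM such as bidirectional Mamba.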
References
Whether such topology-invariant encoding remains effective when combined with efficient SSM backbones is an open question.
— LuMamba: Latent Unified Mamba for Electrode Topology-Invariant and Efficient EEG Modeling
(2603.19100 - Broustail et al., 19 Mar 2026) in Section 1 (Introduction)