Optimal pre-training strategy for SSM-based EEG foundation models
Determine the optimal self-supervised pre-training strategy for the state-space model (SSM) architectures used in EEG foundation models; in particular, clarify the relative merits of masked reconstruction, contrastive learning, and related objectives for learning transferable, robust EEG representations.
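To make the two dominant objectives concrete, the sketch below contrasts a masked-reconstruction loss with an InfoNCE-style contrastive loss over a placeholder encoder. This is a minimal illustration, not LuMamba's implementation: the single Conv1d standing in for an SSM/Mamba encoder, the `augment` transform, and all module and function names are hypothetical, and the code only assumes the conventional (batch, channels, time) EEG layout.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

# Hypothetical stand-ins: a real SSM foundation model would use Mamba blocks
# in place of this single Conv1d encoder.
encoder = nn.Conv1d(19, 64, kernel_size=7, padding=3)  # (B, C, T) -> (B, D, T)
decoder = nn.Conv1d(64, 19, kernel_size=7, padding=3)  # latents back to signal space
projector = nn.Linear(64, 32)                          # contrastive projection head

def augment(x):
    """Toy EEG augmentation: random amplitude scaling plus channel dropout."""
    scale = 1.0 + 0.1 * torch.randn(x.size(0), 1, 1)
    keep = (torch.rand(x.size(0), x.size(1), 1) > 0.1).float()
    return x * scale * keep

def masked_reconstruction_loss(x, mask_ratio=0.5):
    """Zero out random time steps and reconstruct the raw signal there."""
    b, _, t = x.shape
    mask = torch.rand(b, 1, t) < mask_ratio          # True = masked position
    x_hat = decoder(encoder(x.masked_fill(mask, 0.0)))
    m = mask.expand_as(x)                            # score masked positions only
    return F.mse_loss(x_hat[m], x[m])

def contrastive_loss(x, temperature=0.1):
    """InfoNCE: embeddings of two views of the same window attract, others repel."""
    z1 = F.normalize(projector(encoder(augment(x)).mean(dim=-1)), dim=-1)
    z2 = F.normalize(projector(encoder(augment(x)).mean(dim=-1)), dim=-1)
    logits = z1 @ z2.t() / temperature               # (B, B) similarity matrix
    return F.cross_entropy(logits, torch.arange(x.size(0)))

x = torch.randn(8, 19, 512)  # batch of 19-channel, 512-sample EEG windows
print(masked_reconstruction_loss(x).item(), contrastive_loss(x).item())
```

The open question is which of these signal families (or a combination) best exploits the recurrent, linear-time structure of SSMs; the sketch only fixes the loss interfaces that any such comparison would share.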
References
"Current EEG foundation models predominantly rely on masked reconstruction or contrastive learning, yet the optimal strategy for SSM architectures remains unclear."
— LuMamba: Latent Unified Mamba for Electrode Topology-Invariant and Efficient EEG Modeling
(arXiv:2603.19100, Broustail et al., 19 Mar 2026), Section 1 (Introduction)