Optimal pre-training strategy for SSM-based EEG foundation models

Determine the optimal self-supervised pre-training strategy for state-space model (SSM) architectures used in EEG foundation models, in particular clarifying the relative merits of masked reconstruction, contrastive learning, and related objectives for learning transferable and robust EEG representations.

Background

Most EEG foundation models have used masked reconstruction or contrastive learning, but how well these objectives suit state-space models remains unsettled. LeJEPA was recently proposed in vision as a principled alternative emphasizing isotropic embeddings and predictive alignment, yet its application to biosignals is nascent.
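
To make the three objective families concrete, the sketch below expresses each as a loss over generic sequence embeddings in PyTorch. It is a minimal illustration under stated assumptions: the function names, tensor shapes, temperature, and the covariance-based isotropy penalty are illustrative choices, not the formulations used by LuMamba or LeJEPA.

```python
# Minimal sketch of the three pre-training objective families discussed above.
# Shapes, hyperparameters, and the isotropy penalty are illustrative assumptions.
import torch
import torch.nn.functional as F


def masked_reconstruction_loss(pred, target, mask):
    """MSE computed only on masked positions.

    pred, target: (batch, time, channels); mask: (batch, time) bool,
    True where the input was masked and must be reconstructed.
    """
    mask = mask.float()
    err = ((pred - target) ** 2).mean(dim=-1)       # average over channels -> (B, T)
    return (err * mask).sum() / mask.sum().clamp(min=1.0)


def info_nce_loss(z1, z2, temperature=0.1):
    """Contrastive (InfoNCE) loss between two augmented views.

    z1, z2: (batch, dim) embeddings; matching rows are treated as positives.
    """
    z1, z2 = F.normalize(z1, dim=-1), F.normalize(z2, dim=-1)
    logits = z1 @ z2.t() / temperature               # (B, B) similarity matrix
    labels = torch.arange(z1.size(0), device=z1.device)
    return F.cross_entropy(logits, labels)


def jepa_style_loss(pred_emb, target_emb, isotropy_weight=0.1):
    """LeJEPA-style objective as described informally above: predictive
    alignment in latent space plus a penalty pushing the embedding
    covariance toward a scaled identity (isotropy). Illustrative only.
    """
    align = F.mse_loss(pred_emb, target_emb.detach())          # predict target latents
    z = pred_emb - pred_emb.mean(dim=0, keepdim=True)
    cov = (z.t() @ z) / max(z.size(0) - 1, 1)                   # (D, D) covariance
    eye = torch.eye(cov.size(0), device=cov.device)
    iso = ((cov - eye * cov.diagonal().mean()) ** 2).mean()
    return align + isotropy_weight * iso
```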

LuMamba investigates combinations of masked reconstruction and LeJEPA for EEG, but the paper explicitly notes that the optimal pre-training strategy for SSMs remains unclear.
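
As a rough illustration of how such a combination could be wired together, the snippet below sums the two losses from the previous sketch with tunable weights. The interface and weighting are hypothetical and should not be read as LuMamba's reported recipe.

```python
# Hypothetical combination of masked reconstruction and a JEPA-style objective.
# Assumes masked_reconstruction_loss and jepa_style_loss from the sketch above
# are in scope; the weighting scheme is an assumption, not the paper's recipe.
def combined_pretraining_loss(recon_pred, recon_target, mask,
                              pred_emb, target_emb,
                              recon_weight=1.0, jepa_weight=1.0):
    l_recon = masked_reconstruction_loss(recon_pred, recon_target, mask)
    l_jepa = jepa_style_loss(pred_emb, target_emb)
    return recon_weight * l_recon + jepa_weight * l_jepa
```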

References

Current EEG foundation models predominantly rely on masked reconstruction or contrastive learning, yet the optimal strategy for SSM architectures remains unclear.

LuMamba: Latent Unified Mamba for Electrode Topology-Invariant and Efficient EEG Modeling (2603.19100 - Broustail et al., 19 Mar 2026) in Section 1 (Introduction)