Adoption of autoregressive self-supervised pretraining for EEG foundation models
Determine whether autoregressive sequence-modeling paradigms (e.g., GPT-style pretraining that predicts future EEG segments from past context) will be adopted for pretraining EEG foundation models, and assess their feasibility for learning robust representations from multichannel EEG data.
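To make the paradigm concrete, below is a minimal sketch of what GPT-style autoregressive pretraining on EEG could look like: fixed-length multichannel segments play the role of tokens, a causally masked transformer encodes past segments, and a regression head predicts the raw samples of the next segment. All module names, shapes, and hyperparameters here are illustrative assumptions, not taken from any reviewed model.

```python
# Hedged sketch of GPT-style autoregressive pretraining on EEG.
# All names, shapes, and hyperparameters are illustrative assumptions.
import torch
import torch.nn as nn

class AutoregressiveEEGPretrainer(nn.Module):
    def __init__(self, n_channels=64, segment_len=200, d_model=256,
                 n_heads=8, n_layers=6, max_segments=128):
        super().__init__()
        # Each multichannel segment (n_channels x segment_len samples)
        # is flattened and linearly embedded, analogous to a token embedding.
        self.embed = nn.Linear(n_channels * segment_len, d_model)
        self.pos = nn.Embedding(max_segments, d_model)
        layer = nn.TransformerEncoderLayer(
            d_model, n_heads, dim_feedforward=4 * d_model,
            batch_first=True, norm_first=True)
        self.encoder = nn.TransformerEncoder(layer, n_layers)
        # Regression head: predict the raw samples of the next segment,
        # a continuous-valued analogue of next-token prediction.
        self.head = nn.Linear(d_model, n_channels * segment_len)

    def forward(self, x):
        # x: (batch, n_segments, n_channels, segment_len)
        b, s, c, t = x.shape
        tokens = self.embed(x.reshape(b, s, c * t))
        tokens = tokens + self.pos(torch.arange(s, device=x.device))
        # Causal mask so each position attends only to past segments.
        mask = nn.Transformer.generate_square_subsequent_mask(s).to(x.device)
        h = self.encoder(tokens, mask=mask)
        return self.head(h).reshape(b, s, c, t)

def pretraining_loss(model, x):
    # Predict segment k+1 from segments <= k; MSE over raw samples.
    pred = model(x)
    return nn.functional.mse_loss(pred[:, :-1], x[:, 1:])

if __name__ == "__main__":
    model = AutoregressiveEEGPretrainer()
    batch = torch.randn(4, 16, 64, 200)  # 4 windows of 16 segments each
    loss = pretraining_loss(model, batch)
    loss.backward()
    print(f"loss: {loss.item():.4f}")
```

A design choice this sketch leaves open is the prediction target: regressing raw continuous samples, as above, differs from the standard LLM recipe, and an alternative closer to it would be to quantize segments with a learned codebook and train with a cross-entropy next-token loss.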
References
Interestingly, despite the success of autoregressive models in LLMs (Raiaan et al., 2024), they are not popular in EEG. It remains open if future work will adopt this direction.
— Systematic review of self-supervised foundation models for brain network representation using electroencephalography
(2602.03269 - Portmann et al., 3 Feb 2026) in Discussion