Adoption of autoregressive self-supervised pretraining for EEG foundation models

Determine whether autoregressive sequence modeling (e.g., GPT-style pretraining that predicts future electroencephalography (EEG) segments from past context) will be adopted for pretraining EEG foundation models, and assess its feasibility for learning robust representations from multichannel EEG data.
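
To make the setup concrete, below is a minimal sketch of what GPT-style next-segment prediction on multichannel EEG could look like. This is not the source's method: the flattened-segment tokenization, the L2 regression target, and all hyperparameters are illustrative assumptions.

```python
# A minimal sketch (not the source's method) of GPT-style autoregressive
# pretraining on multichannel EEG. Segment tokenization, the regression
# target, and all hyperparameters are illustrative assumptions.
import torch
import torch.nn as nn
import torch.nn.functional as F

class EEGBackbone(nn.Module):
    def __init__(self, n_channels=64, seg_len=200, d_model=256,
                 n_heads=8, n_layers=4, max_segments=512):
        super().__init__()
        seg_dim = n_channels * seg_len             # raw samples per segment
        self.embed = nn.Linear(seg_dim, d_model)   # segment -> token embedding
        self.pos = nn.Embedding(max_segments, d_model)
        layer = nn.TransformerEncoderLayer(d_model, n_heads, batch_first=True)
        self.encoder = nn.TransformerEncoder(layer, n_layers)
        self.head = nn.Linear(d_model, seg_dim)    # project back to raw samples

    def forward(self, segments, causal=True):
        # segments: (batch, n_segments, n_channels, seg_len)
        b, s, c, t = segments.shape
        x = self.embed(segments.reshape(b, s, c * t))
        x = x + self.pos(torch.arange(s, device=x.device))
        # additive attention mask: -inf above the diagonal blocks future segments
        mask = (torch.full((s, s), float('-inf'), device=x.device).triu(1)
                if causal else None)
        h = self.encoder(x, mask=mask)
        return self.head(h).reshape(b, s, c, t)

def autoregressive_loss(model, segments):
    # GPT-style objective: predict segment k+1 from segments 0..k
    # (teacher forcing, with L2 regression on the raw samples).
    pred = model(segments[:, :-1], causal=True)
    return F.mse_loss(pred, segments[:, 1:])
```

With `causal=True`, each position attends only to past segments, so training on unlabeled recordings yields a next-segment predictor; the continuous regression target, rather than a discrete token vocabulary, is the main departure from the LLM setting.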

Background

Within the surveyed literature, most EEG foundation models rely on masked auto-encoding objectives for self-supervised pretraining, while autoregressive approaches, despite being highly successful in large language models (LLMs), are rarely used for EEG. The authors highlight this discrepancy and explicitly note the uncertainty about whether the autoregressive direction will be taken up in EEG modeling.

This uncertainty reflects broader methodological choices in the field, where the balance between contrastive, masked-reconstruction, and autoregressive objectives remains unsettled. Establishing whether autoregressive objectives can be integrated effectively into EEG pretraining would clarify the space of viable self-supervised learning (SSL) strategies for whole-brain representation learning.
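
For contrast, a masked auto-encoding objective, the dominant choice in the surveyed models, could reuse the same hypothetical backbone by dropping the causal mask and reconstructing randomly masked segments from bidirectional context. The mask ratio and zero-masking below are again illustrative assumptions, not a surveyed model's recipe:

```python
import torch
import torch.nn.functional as F

def masked_reconstruction_loss(model, segments, mask_ratio=0.5):
    # Masked auto-encoding counterpart to the autoregressive sketch above:
    # zero out random segments and reconstruct them from bidirectional context.
    b, s, c, t = segments.shape
    masked = torch.rand(b, s, device=segments.device) < mask_ratio
    corrupted = segments.clone()
    corrupted[masked] = 0.0                 # crude stand-in for a learned mask token
    pred = model(corrupted, causal=False)   # full bidirectional attention
    return F.mse_loss(pred[masked], segments[masked])
```

That the two objectives differ only in the attention mask and target selection suggests the choice is largely orthogonal to architecture, which is part of why the question remains open.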

References

Interestingly, despite the success of autoregressive models in LLMs (Raiaan et al., 2024), they are not popular in EEG. It remains open whether future work will adopt this direction.