This paper was accepted at the Foundation Models for the Brain and Body workshop at NeurIPS 2025.
Self-supervised learning (SSL) offers a promising approach for learning electroencephalography (EEG) representations from unlabeled data, reducing the need for costly annotations in clinical applications such as sleep staging and seizure detection. While existing EEG SSL methods predominantly use masked reconstruction techniques such as masked autoencoders (MAE) that capture local temporal patterns, position-prediction pretraining remains underexplored despite its potential to learn long-range dependencies in neural signals. We introduce PAirwise Relative Shift (PARS) pretraining, a novel pretext task that predicts relative temporal shifts between randomly sampled EEG window pairs. Unlike reconstruction-based methods that focus on local pattern recovery, PARS encourages encoders to capture the relative temporal composition and long-range dependencies inherent in neural signals. Through comprehensive evaluation on diverse EEG decoding tasks, we demonstrate that PARS-pretrained transformers consistently outperform existing pretraining strategies in label-efficient and transfer learning settings, establishing a new paradigm for self-supervised EEG representation learning.
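To make the pretext task concrete, below is a minimal sketch of how PARS-style training pairs could be constructed: two windows are sampled at random onsets from one recording, and the target is their relative temporal shift. The function name, the normalization of the shift by window length, and the regression-style target are illustrative assumptions, not the paper's exact formulation.

```python
import numpy as np

def sample_pars_pair(eeg, win_len, rng):
    """Sample two windows from one EEG recording and return them
    together with their relative temporal shift (the PARS target).

    eeg: array of shape (channels, time), win_len: window length in samples.
    """
    n_samples = eeg.shape[-1]
    starts = rng.integers(0, n_samples - win_len, size=2)
    w1 = eeg[..., starts[0]:starts[0] + win_len]
    w2 = eeg[..., starts[1]:starts[1] + win_len]
    # Assumed target: offset between window onsets, normalized by window length.
    shift = (starts[1] - starts[0]) / win_len
    return w1, w2, shift

# Example: a synthetic 19-channel recording, 256 Hz, 60 s, 2 s windows.
rng = np.random.default_rng(0)
eeg = rng.standard_normal((19, 256 * 60))
w1, w2, shift = sample_pars_pair(eeg, win_len=256 * 2, rng=rng)
print(w1.shape, w2.shape, shift)  # (19, 512) (19, 512) relative-shift target
```

In this reading, an encoder embeds each window and a small head predicts the shift from the pair of embeddings, so the model must reason about where windows sit relative to one another rather than reconstructing masked content.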
**Work performed during an Apple internship
†Stanford University
‡California Institute of Technology
§University of Amsterdam

