Structured Prototype-Guided Adaptation for EEG Foundation Models
Electroencephalography (EEG) foundation models (EFMs) have achieved strong performance under full fine-tuning but exhibit poor generalization when subject-level supervision is limited, a common constraint in real-world clinical settings. We show that this failure stems not merely from limited supervision, but from a structural mismatch between noisy, limited supervision and the highly plastic parameter space of EFMs. To address this challenge, we propose SCOPE, a Structured COnfidence-aware Prototype-guided adaptation framework for EFM fine-tuning. SCOPE follows a two-stage pipeline. In the first stage, we construct reliable external supervision by learning geometry-regularized task priors, constructing balanced class-level prototypes over the resulting embeddings, and producing confidence-aware pseudo-labels from their agreement to filter unreliable signals on unlabeled data. In the second stage, we introduce ProAdapter, which adapts frozen EEG foundation models via a lightweight adapter conditioned on the structured prototypes. Experiments across three EEG tasks and five foundation model backbones demonstrate that SCOPE consistently achieves strong performance and efficiency under label-limited cross-subject settings.
💡 Research Summary
The paper tackles a critical problem in EEG foundation model (EFM) adaptation: when only a small number of subjects are labeled—a common situation in clinical practice—full fine‑tuning of large pre‑trained models becomes unstable, over‑confident, and fails to generalize. The authors argue that the issue is not merely data scarcity but a structural mismatch: noisy, limited supervision interacts poorly with the highly plastic parameter space of EFMs, causing the models to overfit subject‑specific patterns and ignore the true task‑level class structure.
To solve this, they propose SCOPE (Structured Confidence‑aware Prototype‑guided adaptation), a two‑stage framework.
Stage I – External Structured Supervision Construction
- Task‑Prior Network (TPN) – a lightweight encoder‑classifier trained on the few labeled samples. A geometric regularizer based on an Equiangular Tight Frame (ETF) forces the classifier weights to be maximally angularly separated, yielding cluster‑friendly embeddings.
- Prototype Bank – using the TPN embeddings, the method builds a balanced set of class‑wise prototypes (M prototypes per class) via Sinkhorn‑Knopp constrained clustering. This captures intra‑class variability while preserving inter‑class separation.
- Confidence‑aware Fusion – pseudo‑labels are generated only when the TPN’s hard prediction and the prototype‑based prediction agree. The two sources are treated as belief functions and combined using Dempster‑Shafer theory; an entropy‑based confidence score γ is derived from the fused belief. Samples with low γ are discarded, providing reliable supervision for the unlabeled pool.
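The ETF head in the Task-Prior Network can be built in closed form: the K class weight vectors are taken as the vertices of a simplex ETF, which fixes every pairwise cosine similarity at exactly −1/(K−1), the maximum possible angular separation. A minimal sketch of this standard construction (function name is ours; the paper may parameterize it differently):

```python
import numpy as np

def simplex_etf(num_classes: int, dim: int, seed: int = 0) -> np.ndarray:
    """Return a (num_classes, dim) simplex ETF weight matrix.

    Requires dim >= num_classes. Rows are unit-norm, and every pair of
    rows has cosine similarity -1/(num_classes - 1), i.e. the class
    directions are maximally angularly separated.
    """
    assert dim >= num_classes
    rng = np.random.default_rng(seed)
    # Random orthonormal basis for the class subspace via QR decomposition.
    U, _ = np.linalg.qr(rng.standard_normal((dim, num_classes)))
    # Centering matrix maps the standard basis onto simplex vertices.
    center = np.eye(num_classes) - np.ones((num_classes, num_classes)) / num_classes
    W = np.sqrt(num_classes / (num_classes - 1)) * (U @ center)
    return W.T  # one fixed weight vector per class

W = simplex_etf(num_classes=5, dim=64)
cos = W @ W.T  # diagonal is 1, off-diagonal is exactly -1/4
```

Freezing (or regularizing toward) such a head removes the classifier weights as a source of collapse and pushes the encoder to produce cluster-friendly embeddings.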
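The balanced clustering step of the Prototype Bank can be sketched with Sinkhorn-Knopp iterations: starting from a sample-to-prototype similarity matrix, alternating row and column normalizations yield a soft assignment whose columns carry equal mass, so no prototype collapses or starves. This is a minimal illustrative version (names and hyperparameters are ours, not the paper's):

```python
import numpy as np

def sinkhorn_assign(scores: np.ndarray, n_iters: int = 200, eps: float = 0.1) -> np.ndarray:
    """Balanced soft assignment of N samples to M prototypes.

    `scores` is an (N, M) similarity matrix (e.g. embeddings @ prototypes.T).
    The returned Q has rows summing to 1 (one assignment distribution per
    sample) and columns summing to ~N/M, so every prototype receives an
    equal share of samples -- the balance constraint that keeps intra-class
    variability spread across all M prototypes.
    """
    Q = np.exp(scores / eps)  # temperature eps controls assignment sharpness
    for _ in range(n_iters):
        Q /= Q.sum(axis=0, keepdims=True)  # equalize prototype load
        Q /= Q.sum(axis=1, keepdims=True)  # renormalize each sample
    return Q
```

Given the converged `Q`, the prototypes themselves can then be refreshed as assignment-weighted means of the embeddings, `Q.T @ X / Q.sum(axis=0)[:, None]`, and the procedure repeated per class.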
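The confidence-aware fusion can be illustrated with a simplified instantiation of Dempster's rule: each source (TPN and prototype bank) contributes belief masses over the K singleton classes plus an "unknown" mass, the two are combined with conflict renormalization, and an entropy-based score γ is read off the fused masses. The exact mass construction in the paper may differ; this sketch only shows the mechanics:

```python
import numpy as np

def ds_fuse(b1, u1, b2, u2):
    """Dempster's rule for two mass functions over K singleton classes
    plus an 'unknown' (whole-frame) mass u. Returns fused (beliefs, unknown)."""
    # Conflict: mass assigned to incompatible singleton pairs.
    conflict = np.sum(np.outer(b1, b2)) - np.sum(b1 * b2)
    scale = 1.0 - conflict
    b = (b1 * b2 + b1 * u2 + u1 * b2) / scale  # agreement survives, discounted by conflict
    u = (u1 * u2) / scale
    return b, u

def confidence(b, u):
    """Entropy-based confidence gamma in [0, 1]: 1 minus normalized entropy
    of the fused masses. High gamma = peaked, agreeing evidence."""
    p = np.append(b, u)
    p = p / p.sum()
    h = -(p * np.log(p + 1e-12)).sum()
    return 1.0 - h / np.log(len(p))
```

When both sources put their dominant mass on the same class, the fused belief sharpens and γ rises; when they disagree, conflict renormalization spreads the mass and γ drops, so the sample is discarded from the pseudo-label pool.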
Stage II – Prototype‑Conditioned Adaptation (ProAdapter)
The EFM backbone is kept frozen; instead, lightweight adapters are inserted into the deeper layers. Each adapter receives the prototype vectors as conditioning inputs and modulates the intermediate representations accordingly. Training uses the pseudo‑labels from Stage I, weighted by their confidence scores, and updates only the adapter parameters. Restricting gradient flow in this way forces adaptation to respect the external class structure, preserving pretrained knowledge while fitting the target task.
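One plausible shape for such a prototype-conditioned adapter is a bottleneck residual module that cross-attends over the prototype bank before the down/up projection, with a zero-initialized up-projection so adaptation starts from the identity. This is a hypothetical NumPy sketch under those assumptions, not the paper's exact ProAdapter design:

```python
import numpy as np

def softmax(x, axis=-1):
    e = np.exp(x - x.max(axis=axis, keepdims=True))
    return e / e.sum(axis=axis, keepdims=True)

class ProAdapter:
    """Bottleneck adapter conditioned on a frozen prototype bank (sketch).

    Only W_down / W_up would be trained; the backbone features and the
    prototypes are treated as frozen inputs.
    """
    def __init__(self, dim: int, bottleneck: int, prototypes: np.ndarray, seed: int = 0):
        rng = np.random.default_rng(seed)
        self.P = prototypes                       # (num_protos, dim), frozen
        self.W_down = rng.standard_normal((dim, bottleneck)) * 0.02
        self.W_up = np.zeros((bottleneck, dim))   # zero-init: identity at step 0

    def __call__(self, h: np.ndarray) -> np.ndarray:
        # Attend over prototypes to fetch a class-structure context vector.
        attn = softmax(h @ self.P.T / np.sqrt(h.shape[-1]))
        ctx = attn @ self.P
        # Bottleneck residual update on the frozen backbone feature.
        return h + np.tanh((h + ctx) @ self.W_down) @ self.W_up
```

The zero-initialized `W_up` means the frozen model's behavior is exactly preserved before training, a common trick for keeping early adapter updates stable.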
Experiments cover three EEG tasks (sleep staging, affective assessment, BCI command recognition) and five modern backbone architectures (various Transformers, state‑space models, ConvNeXt‑style networks). Under label‑limited cross‑subject settings (10‑30 % of subjects labeled), SCOPE consistently outperforms full fine‑tuning and other parameter‑efficient fine‑tuning baselines, achieving 6‑9 % absolute gains in Kappa/accuracy while adding less than 0.5 % extra parameters. It also shows smoother loss curves, reduced sensitivity to random seeds, and better calibration (lower over‑confidence).
Key contributions
- An external supervision pipeline that combines ETF‑regularized inter‑class geometry with balanced prototype clustering, delivering structured, class‑level guidance.
- A Dempster‑Shafer‑based confidence fusion that filters unreliable pseudo‑labels and provides sample‑wise weighting.
- A prototype‑conditioned adapter design that enables stable, parameter‑efficient adaptation of large EEG foundation models.
The work demonstrates that incorporating structured, confidence‑aware external signals can bridge the gap between limited subject‑level labels and high‑capacity EFMs, offering a practical path toward real‑world clinical EEG deployment. Future directions include extending the approach to regression‑style brain-state estimation and exploring other bio‑signal modalities.