ManifoldFormer: Geometric Deep Learning for Neural Dynamics on Riemannian Manifolds

Reading time: 5 minutes
...

📝 Abstract

Existing EEG foundation models mainly treat neural signals as generic time series in Euclidean space, ignoring the intrinsic geometric structure of neural dynamics that constrains brain activity to low-dimensional manifolds. This fundamental mismatch between model assumptions and neural geometry limits representation quality and cross-subject generalization. ManifoldFormer addresses this limitation through a novel geometric deep learning framework that explicitly learns neural manifold representations. The architecture integrates three key innovations: a Riemannian VAE for manifold embedding that preserves geometric structure, a geometric Transformer with geodesic-aware attention mechanisms operating directly on neural manifolds, and a dynamics predictor leveraging neural ODEs for manifold-constrained temporal evolution. Extensive evaluation across four public datasets demonstrates substantial improvements over state-of-the-art methods, with 4.6-4.8% higher accuracy and 6.2-10.2% higher Cohen’s Kappa, while maintaining robust cross-subject generalization. The geometric approach reveals meaningful neural patterns consistent with neurophysiological principles, establishing geometric constraints as essential for effective EEG foundation models.

📄 Content

MANIFOLDFORMER: GEOMETRIC DEEP LEARNING FOR NEURAL DYNAMICS ON RIEMANNIAN MANIFOLDS

Yihang Fu⋆, Lifang He†, Qingyu Chen‡
⋆ Health Informatics, School of Public Health, Yale University, New Haven, USA
† Department of Computer Science and Engineering, Lehigh University, Bethlehem, USA
‡ Department of Biomedical Informatics and Data Science, School of Medicine, Yale University, New Haven, USA

ABSTRACT

Existing EEG foundation models mainly treat neural signals as generic time series in Euclidean space, ignoring the intrinsic geometric structure of neural dynamics that constrains brain activity to low-dimensional manifolds. This fundamental mismatch between model assumptions and neural geometry limits representation quality and cross-subject generalization. ManifoldFormer addresses this limitation through a novel geometric deep learning framework that explicitly learns neural manifold representations. The architecture integrates three key innovations: a Riemannian VAE for manifold embedding that preserves geometric structure, a geometric Transformer with geodesic-aware attention mechanisms operating directly on neural manifolds, and a dynamics predictor leveraging neural ODEs for manifold-constrained temporal evolution. Extensive evaluation across four public datasets demonstrates substantial improvements over state-of-the-art methods, with 4.6-4.8% higher accuracy and 6.2-10.2% higher Cohen’s Kappa, while maintaining robust cross-subject generalization. The geometric approach reveals meaningful neural patterns consistent with neurophysiological principles, establishing geometric constraints as essential for effective EEG foundation models.

Index Terms— EEG foundation models, geometric deep learning, Riemannian manifolds, neural dynamics, brain-computer interfaces
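To make the Riemannian setting concrete before diving into the paper, here is a minimal PyTorch sketch, not taken from the authors' code, of two primitives the abstract leans on: projecting Euclidean EEG feature vectors onto a unit hypersphere and measuring geodesic (great-circle) distance on that manifold. The function names, the choice of the hypersphere, and the toy dimensions are illustrative assumptions.

```python
import torch
import torch.nn.functional as F

def project_to_hypersphere(x: torch.Tensor) -> torch.Tensor:
    """Map Euclidean feature vectors onto the unit hypersphere S^{d-1}."""
    return F.normalize(x, p=2, dim=-1)

def geodesic_distance(u: torch.Tensor, v: torch.Tensor, eps: float = 1e-7) -> torch.Tensor:
    """Great-circle (geodesic) distance between unit vectors: arccos of their inner product."""
    cos = (u * v).sum(dim=-1).clamp(-1.0 + eps, 1.0 - eps)
    return torch.acos(cos)

# Toy usage: two random "EEG feature" vectors, compared on the manifold and in Euclidean space.
a = project_to_hypersphere(torch.randn(8))
b = project_to_hypersphere(torch.randn(8))
print(geodesic_distance(a, b).item(), torch.dist(a, b).item())
```

The geodesic distance follows the curvature of the sphere, which is why it generally differs from the straight-line Euclidean distance between the same two embeddings.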

1. INTRODUCTION

Electroencephalography (EEG) foundation models have achieved remarkable progress through large-scale self-supervised learning, yet they fundamentally treat neural signals as generic time series in Euclidean space. This approach ignores a crucial neurobiological principle: neural activity is constrained to low-dimensional dynamical manifolds that encode cognitive states and motor intentions. Existing architectures such as BENDR [1], EEGFormer [2], and CBraMod [3] employ standard attention mechanisms with Euclidean distance metrics, leading to a fundamental mismatch between model assumptions and the intrinsic geometry of neural dynamics. This limitation severely restricts representation quality, hinders cross-subject transferability, and prevents models from capturing the smooth, continuous neural state transitions that characterize brain function.

We introduce ManifoldFormer, a novel architectural framework for EEG foundation models that explicitly models neural signals on Riemannian manifolds. The architecture integrates three cascaded innovations: a Riemannian variational autoencoder (VAE) that learns compact manifold embeddings while preserving local geometric structure through hypersphere and hyperbolic projections; a geometric Transformer that replaces Euclidean attention with geodesic-aware mechanisms that compute attention weights using manifold distances; and a dynamics predictor that employs neural ODEs with manifold constraints to model smooth neural state evolution. This geometric formulation naturally accommodates the curved structure of neural state spaces, enables robust cross-subject alignment through Procrustes transformations, and captures the continuous dynamics underlying EEG signals. Extensive evaluation demonstrates substantial improvements across motor imagery and emotion recognition tasks, confirming that geometric constraints are essential for building effective neural foundation models. Illustrative code sketches of these three components are given at the end of this section.

1.1. EEG Foundation Models and Architectural Innovations

Recent EEG foundation models have achieved breakthrough performance through large-scale self-supervised pre-training on heterogeneous datasets. For example, FoME [4] introduced adaptive temporal-lateral attention scaling trained on 1.7TB of diverse EEG data, while EEGFormer [5] employed vector quantization for interpretable discrete representations. More recent developments have explored alternative architectures: EEGMamba [6] leveraged state space models for improved sequence modeling, CBraMod [3] captured heterogeneous spatial-temporal dependencies using criss-cross transformers with parallel attention mechanisms, and the Large Cognition Model [7] integrated temporal and spectral attention to enhance generalization.

Fig. 1. ManifoldFormer architecture overview showing the three-stage pipeline from EEG input to manifold learning and dynamics prediction.

Earlier foundation models established important precedents. BrainBERT [8] adapted transformers for stereoelectroencephalographic data, LaBraM [9] introduced neural tokenization, and Neuro-GPT [10] demonstrated the potential of generative pre-training.
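As a hedged illustration of what geodesic-aware attention can look like, the sketch below replaces the usual scaled dot-product score with a negative squared geodesic distance on the unit hypersphere. The temperature `tau`, the single-head layout, and the hypersphere choice are assumptions made for exposition, not the paper's exact formulation.

```python
import torch
import torch.nn.functional as F

def geodesic_attention(q: torch.Tensor, k: torch.Tensor, v: torch.Tensor,
                       tau: float = 1.0, eps: float = 1e-7) -> torch.Tensor:
    """Single-head attention whose scores come from geodesic distances on the unit
    hypersphere instead of Euclidean dot products. q, k, v: (batch, seq, dim)."""
    q = F.normalize(q, dim=-1)
    k = F.normalize(k, dim=-1)
    cos = torch.matmul(q, k.transpose(-2, -1)).clamp(-1.0 + eps, 1.0 - eps)
    d_geo = torch.acos(cos)                              # (batch, seq_q, seq_k) geodesic distances
    attn = torch.softmax(-d_geo.pow(2) / tau, dim=-1)    # closer on the manifold -> larger weight
    return torch.matmul(attn, v)

# Toy usage on random "EEG token" embeddings.
x = torch.randn(2, 16, 32)
print(geodesic_attention(x, x, x).shape)  # torch.Size([2, 16, 32])
```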
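The dynamics predictor pairs neural ODEs with manifold constraints. One simple way to emulate that constraint, assumed here purely for illustration, is to integrate a learned vector field with explicit Euler steps, keep each update tangent to the hypersphere, and retract (renormalize) back onto it; a faithful implementation would more likely use a dedicated ODE solver such as torchdiffeq.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class SphereDynamicsPredictor(nn.Module):
    """Toy manifold-constrained dynamics: a learned vector field integrated with
    Euler steps, projected onto the tangent space and retracted to the unit
    hypersphere so the latent trajectory never leaves the manifold."""
    def __init__(self, dim: int, hidden: int = 64):
        super().__init__()
        self.field = nn.Sequential(nn.Linear(dim, hidden), nn.Tanh(), nn.Linear(hidden, dim))

    def forward(self, z0: torch.Tensor, steps: int = 10, dt: float = 0.1) -> torch.Tensor:
        z = F.normalize(z0, dim=-1)
        traj = [z]
        for _ in range(steps):
            v = self.field(z)
            v = v - (v * z).sum(-1, keepdim=True) * z   # remove the radial component (tangent projection)
            z = F.normalize(z + dt * v, dim=-1)         # Euler step followed by retraction
            traj.append(z)
        return torch.stack(traj, dim=1)                 # (batch, steps + 1, dim)

predictor = SphereDynamicsPredictor(dim=32)
print(predictor(torch.randn(4, 32)).shape)  # torch.Size([4, 11, 32])
```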
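Finally, the cross-subject alignment via Procrustes transformations mentioned above can be prototyped with the closed-form orthogonal Procrustes solution: given paired latent embeddings from two subjects, an SVD yields the rotation that best maps one subject's manifold onto the other's. The assumption of pre-paired samples and the variable names are illustrative, not details taken from the paper.

```python
import torch

def procrustes_align(source: torch.Tensor, target: torch.Tensor) -> torch.Tensor:
    """Orthogonal Procrustes: find the rotation R minimizing ||source @ R - target||_F
    and return the rotated source embeddings (aligning one subject's manifold to another's)."""
    u, _, vh = torch.linalg.svd(source.T @ target)
    return source @ (u @ vh)

subject_a = torch.randn(200, 32)   # latent embeddings from subject A (assumed paired)
subject_b = torch.randn(200, 32)   # corresponding embeddings from subject B
print(procrustes_align(subject_a, subject_b).shape)  # torch.Size([200, 32])
```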

This content is AI-processed based on ArXiv data.
