EEG2GAIT: A Hierarchical Graph Convolutional Network for EEG-based Gait Decoding
Decoding gait dynamics from EEG signals presents significant challenges due to the complex spatial dependencies of motor processes, the need for accurate temporal and spectral feature extraction, and the scarcity of high-quality gait EEG datasets. To address these issues, we propose EEG2GAIT, a novel hierarchical graph-based model that captures multi-level spatial embeddings of EEG channels using a Hierarchical Graph Convolutional Network (GCN) Pyramid. To further improve decoding performance, we introduce a Hybrid Temporal-Spectral Reward (HTSR) loss function, which integrates time-domain, frequency-domain, and reward-based loss components. In addition, we contribute a new Gait-EEG Dataset (GED), consisting of synchronized EEG and lower-limb joint angle data collected from 50 participants across two laboratory visits. Extensive experiments demonstrate that EEG2GAIT with HTSR achieves superior performance on the GED dataset, reaching a Pearson correlation coefficient (r) of 0.959, a coefficient of determination (R²) of 0.914, and a Mean Absolute Error (MAE) of 0.193. On the MoBI dataset, EEG2GAIT likewise consistently outperforms existing methods, achieving an r of 0.779, an R² of 0.597, and an MAE of 4.384. Statistical analyses confirm that these improvements are significant compared to all prior models. Ablation studies further validate the contributions of the hierarchical GCN modules and the proposed HTSR loss, while saliency analysis highlights the involvement of motor-related brain regions in decoding tasks. Collectively, these findings underscore EEG2GAIT’s potential for advancing brain-computer interface applications, particularly in lower-limb rehabilitation and assistive technologies.
💡 Research Summary
EEG2GAIT introduces a novel deep‑learning framework for decoding lower‑limb gait dynamics directly from scalp electroencephalography (EEG). The authors identify two major shortcomings in prior work: (1) most graph‑based EEG models rely on a single, static adjacency matrix, which cannot capture the dynamic functional connectivity that changes across gait cycles; and (2) conventional regression losses (MSE, MAE) focus on large errors and provide little gradient signal for well‑predicted samples, thereby neglecting subtle variations that are crucial for accurate gait reconstruction.
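The second shortcoming follows directly from calculus: the gradient of a squared-error loss with respect to a prediction is proportional to its residual, so samples that are already close to the target contribute almost no learning signal. A minimal numeric illustration (toy residuals, not data from the paper):

```python
import numpy as np

# For L = (pred - true)^2, dL/dpred = 2 * (pred - true):
# the gradient shrinks linearly with the residual, so a nearly
# well-predicted sample (residual 0.01) gets ~200x less signal
# than a poorly predicted one (residual 2.0).
residuals = np.array([2.0, 0.5, 0.01])   # large, medium, well-predicted
grad = 2 * residuals
print(grad)   # gradient magnitude tracks the residual
```

This vanishing signal for small residuals is the behavior the reward term of the HTSR loss is designed to counteract.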
To overcome these issues, EEG2GAIT combines a Hierarchical Graph Convolutional Network Pyramid (HGP) with a Hybrid Temporal‑Spectral Reward (HTSR) loss. The HGP consists of two stacked graph encoders, each learning its own adjacency matrix that is updated during training. The first encoder captures fine‑grained, local channel relationships, while the second models higher‑level, global brain‑network topology. By fusing the outputs of both encoders, the model obtains multi‑scale spatial embeddings that reflect both regional and whole‑brain motor activity.
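The two-encoder idea can be sketched as a pair of stacked GCN layers, each holding its own adjacency matrix. This is an illustrative NumPy forward pass, not the paper's implementation: channel count, feature dimension, and the random symmetric initialization (the paper initializes from inter-electrode distances) are all assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)

def normalize_adj(A):
    # Symmetric normalization: D^{-1/2} (A + I) D^{-1/2}
    A_hat = A + np.eye(A.shape[0])
    d_inv_sqrt = np.diag(1.0 / np.sqrt(A_hat.sum(axis=1)))
    return d_inv_sqrt @ A_hat @ d_inv_sqrt

class GCNEncoder:
    """One graph encoder with its own adjacency matrix.
    In the real model the adjacency is a trainable parameter;
    here it is fixed random for illustration."""
    def __init__(self, n_nodes, in_dim, out_dim):
        A = rng.random((n_nodes, n_nodes))
        self.A = (A + A.T) / 2                       # keep it symmetric
        self.W = rng.standard_normal((in_dim, out_dim)) * 0.1

    def forward(self, H):
        return np.maximum(0.0, normalize_adj(self.A) @ H @ self.W)  # ReLU

n_ch, feat = 32, 16                                  # assumed sizes
local_enc  = GCNEncoder(n_ch, feat, feat)            # fine-grained channel relations
global_enc = GCNEncoder(n_ch, feat, feat)            # higher-level network topology

H = rng.standard_normal((n_ch, feat))                # per-channel temporal features
H_local  = local_enc.forward(H)
H_global = global_enc.forward(H_local)               # stacked: global reads local
fused = np.concatenate([H_local, H_global], axis=1)  # multi-scale spatial embedding
print(fused.shape)   # (32, 32)
```

Concatenation is one plausible fusion choice; the key point is that the final embedding carries both the regional (first encoder) and whole-brain (second encoder) views.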
The HTSR loss is a weighted sum of three components: (i) a standard MSE term that penalizes time‑domain deviations, (ii) a Time‑Frequency loss that compares short‑time Fourier spectra of predicted and true joint‑angle trajectories (thereby encouraging the network to respect the spectral signatures of gait, such as β‑ and µ‑rhythms), and (iii) a Reward loss that explicitly boosts gradients for samples that are already well‑predicted, preventing the network from ignoring low‑amplitude but physiologically meaningful fluctuations.
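A hedged sketch of how the three terms might combine. The component weights, the plain-FFT spectral comparison (the paper uses short-time Fourier spectra), and the exponential form of the reward weighting are all stand-ins chosen for illustration, not the paper's exact formulation:

```python
import numpy as np

def htsr_loss(pred, true, w_t=1.0, w_f=0.5, w_r=0.1):
    """Illustrative Hybrid Temporal-Spectral Reward loss.
    Weights and the reward form are assumptions, not from the paper."""
    err = pred - true
    # (i) time-domain term: standard MSE
    mse = np.mean(err ** 2)
    # (ii) spectral term: the paper compares short-time Fourier spectra;
    # a whole-window FFT magnitude difference stands in here
    spec = np.mean((np.abs(np.fft.rfft(pred)) - np.abs(np.fft.rfft(true))) ** 2)
    # (iii) reward term: up-weight samples whose residual is already small,
    # so low-amplitude fluctuations keep contributing gradient
    reward = np.mean(np.exp(-np.abs(err)) * err ** 2)
    return w_t * mse + w_f * spec + w_r * reward

t = np.linspace(0, 2 * np.pi, 200)
true = np.sin(t)                                   # toy joint-angle trajectory
pred = true + 0.05 * np.random.default_rng(1).standard_normal(200)
print(htsr_loss(pred, true))                       # small but positive
print(htsr_loss(true, true))                       # exactly 0.0
```

Note how the reward weighting `exp(-|err|)` is largest where the residual is smallest, which is the opposite emphasis of plain MSE.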
The overall architecture proceeds as follows: a Local Temporal Learner (1‑D convolutions) extracts channel‑wise temporal features; the Graph Construction Module converts these features into graph representations, with adjacency matrices initialized from inter‑electrode distances and refined during training. The HGP processes the graphs, after which a Global Spatial Learner (depth‑wise convolutions) captures whole‑head spatial patterns. Feature Fusion layers combine spatial and temporal streams, followed by a Global Temporal Learner that employs multi‑head self‑attention to model long‑range dependencies across the entire gait window. Finally, a constrained‑weight output layer regresses the fused representation to joint‑angle trajectories.
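The pipeline above can be traced end to end with a shape-level sketch. Every dimension (channels, window length, feature width, joint count) is an assumption, and each stage is a deliberately simplified stand-in: a single mixing matrix replaces the HGP, a channel projection replaces the depth-wise Global Spatial Learner, and single-head attention replaces the multi-head Global Temporal Learner.

```python
import numpy as np

rng = np.random.default_rng(0)
C, T, D, J = 32, 100, 16, 6          # channels, samples, feature dim, joints (assumed)
x = rng.standard_normal((C, T))      # one EEG window

# Local Temporal Learner: channel-wise 1-D convolution
kernel = rng.standard_normal(5) * 0.1
h = np.stack([np.convolve(ch, kernel, mode="same") for ch in x])   # (C, T)

# Graph Construction + HGP stand-in: spatial mixing across channels
A = rng.random((C, C)); A = (A + A.T) / 2
h = np.maximum(0.0, (A / C) @ h)                                   # (C, T)

# Global Spatial Learner stand-in: project channels to a feature dim
Wg = rng.standard_normal((D, C)) * 0.1
f = (Wg @ h).T                                                     # (T, D)

# Global Temporal Learner: single-head self-attention over time
Wq, Wk, Wv = (rng.standard_normal((D, D)) * 0.1 for _ in range(3))
Q, K, V = f @ Wq, f @ Wk, f @ Wv
scores = Q @ K.T / np.sqrt(D)
attn = np.exp(scores - scores.max(axis=1, keepdims=True))
attn /= attn.sum(axis=1, keepdims=True)                            # softmax rows
z = attn @ V                                                       # (T, D)

# Output layer: regress each time step to joint-angle values
Wo = rng.standard_normal((D, J)) * 0.1
y = z @ Wo
print(y.shape)   # (100, 6): one joint-angle vector per time sample
```

The point of the trace is the data flow, temporal features → spatial graph mixing → attention over time → per-sample joint-angle regression, rather than the exact layer internals.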
Experiments were conducted on two datasets. The newly released Gait‑EEG Dataset (GED) contains synchronized EEG (100 Hz) and lower‑limb joint angles from 50 participants across two laboratory visits, providing a high‑quality benchmark. The public MoBI dataset serves as an external validation set. EEG2GAIT achieved Pearson r = 0.959, R² = 0.914, MAE = 0.193 on GED, and r = 0.779, R² = 0.597, MAE = 4.384 on MoBI—substantially outperforming state‑of‑the‑art baselines such as LSTM, CNN‑based regressors, and single‑GCN models. Statistical tests (p < 0.01) confirm the significance of these gains.
Ablation studies reveal the contribution of each component: removing the hierarchical GCN reduces r by 0.03–0.07; disabling dynamic adjacency updates lowers R² by ~0.05; replacing HTSR with pure MSE increases MAE by 12–18%. Saliency map analysis highlights the involvement of motor‑related cortical regions (frontal, sensorimotor, and parietal areas) and the β‑band, aligning the model’s attention with established neurophysiological findings.
The paper’s primary contributions are threefold: (1) a hierarchical graph‑pyramid that learns multi‑scale spatial embeddings from EEG, (2) the HTSR loss that jointly optimizes time, frequency, and reward objectives for continuous regression, and (3) the release of a large, well‑annotated gait‑EEG dataset. Limitations include the current focus on 100 Hz EEG (requiring adaptation for higher‑resolution recordings), the need for model compression for real‑time embedded deployment, and the fact that joint‑angle prediction is performed per‑joint rather than as a full 3‑D kinematic chain.
Overall, EEG2GAIT sets a new benchmark for EEG‑based gait decoding, offering a robust, neurophysiologically grounded approach that can be extended to other continuous biosignal regression tasks and to brain‑computer interface applications such as lower‑limb rehabilitation exoskeletons, assistive gait devices, and neuro‑feedback systems.