A Multimodal fNIRS-EEG Dataset for Unilateral Limb Motor Imagery


Unilateral limb motor imagery (MI) plays an important role in upper-limb motor rehabilitation and in the precise control of external devices, and it places higher demands on spatial resolution than conventional left-right paradigms. However, most existing public datasets focus on binary- or four-class left-right limb paradigms that mainly exploit coarse hemispheric lateralization, and multimodal datasets that simultaneously record EEG and fNIRS during unilateral multi-directional MI are still lacking. To address this gap, we constructed MIND, a public motor imagery fNIRS-EEG dataset based on a four-class directional MI paradigm of the right upper limb. The dataset comprises 64-channel EEG recordings (1000 Hz) and 51-channel fNIRS recordings (47.62 Hz) from 30 participants (12 female, 18 male; aged 19.0-25.0 years). We analyse the spatiotemporal characteristics of EEG spectral power and hemodynamic responses, and validate the potential advantages of hybrid fNIRS-EEG BCIs in terms of classification accuracy. We expect this dataset to facilitate the evaluation and comparison of neuroimaging analysis and decoding methods.


💡 Research Summary

This paper introduces MIND, a publicly available multimodal dataset that simultaneously records 64‑channel electroencephalography (EEG) and 51‑channel functional near‑infrared spectroscopy (fNIRS) during unilateral upper‑limb motor imagery (MI). Unlike most existing MI datasets that focus on binary or four‑class left‑right hand paradigms and rely solely on EEG, MIND captures four directional MI tasks (horizontal left‑to‑right, vertical up‑to‑down, diagonal upper‑left to lower‑right, and diagonal upper‑right to lower‑left) performed with the right arm. Thirty healthy right‑handed participants (18 M, 12 F, ages 19–25) completed three modules (horizontal, vertical, diagonal) with two sessions per module, yielding a total of 120 trials per subject (30 trials per class) and 3,600 trials overall.

EEG was acquired at 1000 Hz using a 64‑electrode Ag/AgCl cap arranged according to the international 10‑20 system, with impedance kept below 10 kΩ. fNIRS data were collected with a continuous‑wave LABNIRS system at three wavelengths (780, 805, 830 nm), forming 51 source‑detector pairs (30 mm separation) covering frontal, parietal, temporal, and sensorimotor cortices, sampled at 47.62 Hz. Synchronisation between the acquisition system and the experimental software (E‑Prime/E‑Studio) was achieved via digital triggers, ensuring precise temporal alignment of task events.
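To make the role of the three wavelengths concrete: converting raw light intensities into hemoglobin concentration changes (done later in preprocessing, via HOMER2 in the authors' pipeline) follows the modified Beer-Lambert law. Below is a minimal least-squares sketch; the extinction coefficients and the single differential pathlength factor are nominal placeholder values chosen only for illustration, not taken from the paper:

```python
import numpy as np

# Illustrative extinction coefficients [1/(mM*cm)] for (HbO, HbR) at the
# LABNIRS wavelengths 780/805/830 nm -- nominal placeholders, not the paper's.
EXT = np.array([[0.74, 1.10],   # 780 nm (HbR absorbs more)
                [0.92, 0.88],   # 805 nm (near the isosbestic point)
                [1.06, 0.78]])  # 830 nm (HbO absorbs more)
D = 3.0     # source-detector separation in cm (30 mm, per the dataset)
DPF = 6.0   # assumed single differential pathlength factor, for simplicity

def mbll(intensity, i0):
    """Recover [dHbO, dHbR] (mM) from intensities at the three wavelengths.

    Solves delta_OD = (EXT * D * DPF) @ [dHbO, dHbR] by least squares.
    """
    d_od = -np.log(intensity / i0)  # change in optical density per wavelength
    coefs, *_ = np.linalg.lstsq(EXT * D * DPF, d_od, rcond=None)
    return coefs

# Round-trip check: a typical activation pattern (HbO up, HbR down)
true_c = np.array([0.02, -0.01])                    # mM
i0 = np.ones(3)
measured = i0 * np.exp(-(EXT @ true_c) * D * DPF)
print(mbll(measured, i0))                            # recovers [0.02, -0.01]
```

In practice each wavelength gets its own pathlength factor and the coefficients come from published spectra; the sketch only shows why three wavelengths comfortably over-determine the two chromophores.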

Pre‑processing followed a standard pipeline: EEG signals were band‑pass filtered (0.5–50 Hz), notch filtered at 50 Hz, bad channels interpolated with spherical splines, and down‑sampled to 250 Hz. Epochs comprised a 2‑second cue, a 10‑second MI execution, and a 10‑second post‑imagery rest, with a –2 s to 0 s baseline correction. fNIRS data were converted to concentration changes of oxy‑ and deoxy‑hemoglobin, motion‑corrected, and low‑pass filtered to remove high‑frequency noise.
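The EEG side of this pipeline can be sketched with SciPy primitives on a synthetic trace (the authors' actual scripts use MNE-Python; filter orders and the notch quality factor below are illustrative assumptions):

```python
import numpy as np
from scipy import signal

FS_RAW, FS_TARGET = 1000, 250  # Hz, per the dataset description

def preprocess_eeg(x, fs=FS_RAW):
    """Band-pass 0.5-50 Hz, 50 Hz notch, then downsample to 250 Hz."""
    # 4th-order Butterworth band-pass, zero-phase via forward-backward filtering
    sos = signal.butter(4, [0.5, 50], btype="bandpass", fs=fs, output="sos")
    x = signal.sosfiltfilt(sos, x)
    # 50 Hz notch for power-line interference (assumed quality factor Q=30)
    b, a = signal.iirnotch(50, Q=30, fs=fs)
    x = signal.filtfilt(b, a, x)
    # Integer-factor downsampling with built-in anti-aliasing
    return signal.decimate(x, fs // FS_TARGET, zero_phase=True)

# Synthetic 10 s channel: 10 Hz mu rhythm plus 50 Hz line noise
t = np.arange(0, 10, 1 / FS_RAW)
raw = np.sin(2 * np.pi * 10 * t) + 0.5 * np.sin(2 * np.pi * 50 * t)
clean = preprocess_eeg(raw)
print(clean.shape)  # (2500,) -> 10 s at 250 Hz
```

Spherical-spline interpolation of bad channels and baseline correction need the full montage and epoch structure, so they are omitted here.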

Time‑frequency analysis of EEG revealed classic μ‑rhythm (8–13 Hz) desynchronisation over central electrodes (C3, C1, CP3) during MI, with subtle but systematic differences across the four directions. Spatially, horizontal and vertical MI produced stronger desynchronisation in distinct sub‑regions, while diagonal MI induced more distributed patterns. fNIRS showed delayed (≈5–7 s) increases in ΔHbO localized to frontal‑central areas; the peak location shifted slightly depending on the imagined direction, confirming that unilateral multi‑directional MI generates fine‑grained hemodynamic topographies.
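The mu-rhythm desynchronisation metric underlying this analysis is a baseline-normalized band-power change (ERD%). A minimal single-channel sketch, assuming epochs that start 2 s before the cue as described above:

```python
import numpy as np
from scipy import signal

FS = 250  # Hz, after downsampling

def mu_erd(epoch, fs=FS, baseline_s=2.0):
    """ERD% in the mu band (8-13 Hz); negative values = desynchronisation."""
    sos = signal.butter(4, [8, 13], btype="bandpass", fs=fs, output="sos")
    power = signal.sosfiltfilt(sos, epoch) ** 2   # instantaneous band power
    n_base = int(baseline_s * fs)
    base = power[:n_base].mean()                  # pre-cue baseline power
    task = power[n_base:].mean()                  # power during imagery
    return 100.0 * (task - base) / base

# Simulated epoch: mu amplitude halves at cue onset (power drops to ~25 %)
rng = np.random.default_rng(0)
t = np.arange(0, 12, 1 / FS)                     # 2 s baseline + 10 s MI
mu = np.sin(2 * np.pi * 10 * t)
mu[int(2 * FS):] *= 0.5
epoch = mu + 0.1 * rng.standard_normal(t.size)
print(mu_erd(epoch))                             # roughly -75
```

Applying this per channel and per direction over C3/C1/CP3 yields the spatial contrasts described above.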

Classification experiments compared single‑modal and multimodal approaches. Using linear discriminant analysis (LDA) or support vector machines (SVM) on EEG alone yielded average accuracies around 68 %, while fNIRS alone reached ≈71 %. A feature‑level fusion that concatenated normalized EEG spectral power with fNIRS ΔHbO/ΔHbR features, followed by Random Forest or a 1‑D convolutional neural network, boosted four‑class accuracy to ≈82 %. Cross‑validation demonstrated that the multimodal models were more robust to inter‑session variability, highlighting the complementary nature of high temporal resolution EEG and higher spatial specificity fNIRS.
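The feature-level fusion step is simply a concatenation of per-trial feature vectors before classification. A self-contained sketch with randomly generated stand-in features (the feature counts match the channel layout, but the data, separability, and Random Forest settings are illustrative assumptions, not the paper's results):

```python
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import cross_val_score
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler

rng = np.random.default_rng(42)
n_trials, n_classes = 120, 4                 # per-subject counts from the summary
y = np.repeat(np.arange(n_classes), n_trials // n_classes)

# Stand-ins for per-trial features: EEG spectral power (64 channels) and
# fNIRS dHbO/dHbR means (51 channels x 2 chromophores); class-dependent shift
# added so the toy problem is learnable.
eeg = rng.standard_normal((n_trials, 64)) + 0.3 * y[:, None]
fnirs = rng.standard_normal((n_trials, 102)) + 0.3 * y[:, None]

def fused_accuracy(*feature_sets):
    """5-fold CV accuracy after feature-level (early) fusion by concatenation."""
    X = np.hstack(feature_sets)
    clf = make_pipeline(StandardScaler(),
                        RandomForestClassifier(n_estimators=200, random_state=0))
    return cross_val_score(clf, X, y, cv=5).mean()

for name, acc in [("EEG", fused_accuracy(eeg)),
                  ("fNIRS", fused_accuracy(fnirs)),
                  ("fused", fused_accuracy(eeg, fnirs))]:
    print(f"{name}: {acc:.2f}")
```

Normalizing each modality before concatenation (here via `StandardScaler`) matters because EEG spectral power and hemoglobin concentration changes live on very different scales.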

The authors provide the raw recordings, preprocessing scripts (MNE‑Python for EEG, HOMER2 for fNIRS), and detailed metadata, facilitating reproducibility and enabling researchers to test novel signal‑processing, feature‑extraction, and deep‑learning strategies. By focusing on unilateral, multi‑directional MI, MIND addresses a critical gap for applications that require fine‑grained control such as upper‑limb rehabilitation robotics, prosthetic hand manipulation, and continuous trajectory guidance.

In summary, MIND is a high‑quality, multimodal benchmark that demonstrates the advantage of EEG‑fNIRS fusion for decoding subtle spatial patterns in unilateral motor imagery. Its comprehensive design, thorough validation, and open‑access policy make it a valuable resource for advancing both methodological research and practical brain‑computer interface implementations.

