HFMCA: Orthonormal Feature Learning for EEG-based Brain Decoding

Notice: This research summary and analysis were automatically generated using AI technology. For absolute accuracy, please refer to the original arXiv source.

Electroencephalography (EEG) analysis is critical for brain-computer interfaces and neuroscience, but the intrinsic noise and high dimensionality of EEG signals hinder effective feature learning. We propose a self-supervised framework based on the Hierarchical Functional Maximal Correlation Algorithm (HFMCA), which learns orthonormal EEG representations by enforcing feature decorrelation and reducing redundancy. This design enables robust capture of essential brain dynamics across a range of EEG recognition tasks. We validate HFMCA on two benchmark datasets, SEED and BCIC-2A, where pretraining with HFMCA consistently outperforms competitive self-supervised baselines, achieving notable gains in classification accuracy. Across diverse EEG tasks, our method demonstrates superior cross-subject generalization under leave-one-subject-out validation, advancing the state of the art by 2.71% on SEED emotion recognition and 2.57% on BCIC-2A motor imagery classification.


💡 Research Summary

This paper introduces a novel self‑supervised learning framework for electroencephalography (EEG) called Hierarchical Functional Maximal Correlation Algorithm (HFMCA) and its enhanced variant HFMCA++. The authors address two fundamental challenges in EEG analysis: the high dimensionality of multi‑channel recordings and the extremely low signal‑to‑noise ratio (SNR) that makes supervised deep learning difficult due to the scarcity of labeled data.

The core idea of HFMCA is to generate multiple augmented views of each raw EEG segment (using channel permutation, channel dropout, temporal masking, and temporal crop‑and‑resize) and to process each view with a shared encoder fθ, producing a set of low‑level representations S = {S1,…,ST}. These low‑level features are concatenated and fed into a projection head gω, yielding a single high‑level representation Z that aggregates information across all views.
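The multi-view pipeline described above can be sketched in a few lines. This is a minimal illustration, not the authors' implementation: the augmentation parameters, the linear encoder standing in for fθ, and the projection head standing in for gω are all hypothetical stand-ins chosen for brevity.

```python
import numpy as np

rng = np.random.default_rng(0)

def channel_dropout(x, p=0.2):
    """Zero out each EEG channel independently with probability p."""
    mask = rng.random(x.shape[0]) > p
    return x * mask[:, None]

def temporal_mask(x, span=50):
    """Zero out a random contiguous span of time samples."""
    x = x.copy()
    start = rng.integers(0, x.shape[1] - span)
    x[:, start:start + span] = 0.0
    return x

def encoder(x, W):
    """Toy linear encoder standing in for the shared f_theta."""
    return np.tanh(W @ x.reshape(-1))

# Toy EEG segment: 8 channels x 256 time samples
x = rng.standard_normal((8, 256))

# Augmented views of the same raw segment
views = [channel_dropout(x), temporal_mask(x), channel_dropout(temporal_mask(x))]

W = rng.standard_normal((16, 8 * 256)) * 0.01    # shared encoder weights
S = [encoder(v, W) for v in views]               # low-level features S_1..S_T

G = rng.standard_normal((32, 16 * len(S))) * 0.1 # projection head g_omega
Z = np.tanh(G @ np.concatenate(S))               # aggregated high-level Z

print(Z.shape)
```

The key structural points match the description: one shared encoder processes every view, and the projection head sees the concatenation of all low-level features, so Z aggregates information across views.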

Statistical dependence between the collection of low‑level features S and the aggregated high‑level representation Z is quantified using the Functional Maximal Correlation Algorithm (FMCA). FMCA expresses the density ratio ρ̂(S,Z) as an infinite sum over eigenvalues σk and orthogonal eigenfunctions ϕk(S), ψk(Z) derived from the cross‑covariance operator in reproducing kernel Hilbert spaces. Practically, the method computes three correlation matrices: R1 = E
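Under the notation above, the density-ratio expansion that FMCA relies on can be written as follows (a sketch consistent with the summary's description; the exact normalization in the paper may differ):

```latex
\[
\hat{\rho}(S, Z) \;=\; \frac{p(S, Z)}{p(S)\,p(Z)}
\;=\; \sum_{k=0}^{\infty} \sigma_k \, \phi_k(S)\, \psi_k(Z),
\]
where the eigenfunctions are orthonormal under their respective marginals:
\[
\mathbb{E}\!\left[\phi_j(S)\,\phi_k(S)\right] = \delta_{jk},
\qquad
\mathbb{E}\!\left[\psi_j(Z)\,\psi_k(Z)\right] = \delta_{jk}.
\]
```

Truncating this sum at a finite K and maximizing the retained eigenvalues is what drives the learned features toward an orthonormal, decorrelated basis.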

