HEDN: A Hard-Easy Dual Network with Source Reliability Assessment for Cross-Subject EEG Emotion Recognition
Cross-subject electroencephalography (EEG) emotion recognition remains a major challenge in brain-computer interfaces (BCIs) due to substantial inter-subject variability. Multi-Source Domain Adaptation (MSDA) offers a potential solution, but existing MSDA frameworks typically assume equal source quality, leading to negative transfer from low-reliability domains and prohibitive computational overhead due to multi-branch model designs. To address these limitations, we propose the Hard-Easy Dual Network (HEDN), a lightweight reliability-aware MSDA framework. HEDN introduces a novel Source Reliability Assessment (SRA) mechanism that dynamically evaluates the structural integrity of each source domain during training. Based on this assessment, sources are routed to two specialized branches: an Easy Network that exploits high-quality sources to construct fine-grained, structure-aware prototypes for reliable pseudo-label generation, and a Hard Network that utilizes adversarial training to refine and align low-quality sources. Furthermore, a cross-network consistency loss aligns predictions between branches to preserve semantic coherence. Extensive experiments conducted on SEED, SEED-IV, and DEAP datasets demonstrate that HEDN achieves state-of-the-art performance across both cross-subject and cross-dataset evaluation protocols while reducing adaptation complexity.
💡 Research Summary
The paper “HEDN: A Hard-Easy Dual Network with Source Reliability Assessment for Cross-Subject EEG Emotion Recognition” addresses a fundamental challenge in affective brain-computer interfaces (BCIs): the significant inter-subject variability in electroencephalography (EEG) signals that hinders model generalization. While Multi-Source Domain Adaptation (MSDA), which treats each subject as a separate source domain, is a promising solution, existing frameworks suffer from two critical flaws. They typically assume all source domains are of equal quality, leading to negative transfer from unreliable sources, and they often employ multi-branch architectures that scale computational overhead with the number of sources.
To overcome these limitations, the authors propose the Hard-Easy Dual Network (HEDN), a novel, lightweight, and reliability-aware MSDA framework. HEDN’s core innovation is its dynamic, assessment-driven approach to processing source domains, moving away from the paradigm of uniform treatment.
The framework is built around three key components. First, the Source Reliability Assessment (SRA) module dynamically evaluates the structural integrity of each source domain during training. It uses the classification cross-entropy loss from a shared classifier as a simple yet effective proxy for reliability: a lower loss indicates clearer class boundaries and higher structural quality. At each training iteration, the source with the highest reliability score is designated the “Easy Source,” and the one with the lowest score is the “Hard Source.”
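The SRA idea as described reduces to a per-source loss comparison. The sketch below is a hypothetical NumPy illustration (not the authors' code): `assess_sources` and its signature are assumed names, and reliability is proxied, as the summary states, by mean cross-entropy under a shared classifier.

```python
import numpy as np

def softmax(logits):
    # Numerically stable softmax over the class axis.
    z = logits - logits.max(axis=1, keepdims=True)
    e = np.exp(z)
    return e / e.sum(axis=1, keepdims=True)

def assess_sources(logits_per_source, labels_per_source):
    """Score each source domain by its mean cross-entropy under the
    shared classifier; lower loss ~ clearer class boundaries ~ higher
    reliability. Returns (easy_idx, hard_idx, per-source losses)."""
    losses = []
    for logits, labels in zip(logits_per_source, labels_per_source):
        probs = softmax(logits)
        ce = -np.log(probs[np.arange(len(labels)), labels] + 1e-12)
        losses.append(ce.mean())
    losses = np.asarray(losses)
    easy_idx = int(np.argmin(losses))  # highest reliability -> Easy Source
    hard_idx = int(np.argmax(losses))  # lowest reliability  -> Hard Source
    return easy_idx, hard_idx, losses
```

In practice this selection would run once per training iteration, so the Easy/Hard designation can change as the shared classifier improves.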
Second, based on this assessment, sources are routed to one of two specialized network branches. The Easy Network processes the high-reliability Easy Source. It capitalizes on the well-defined, multi-cluster structures often present within a subject’s EEG data for the same emotion. It constructs fine-grained, cluster-level prototypes from the source data. These detailed prototypes are then used to generate high-confidence pseudo-labels for target domain samples via a matching process. To further refine these representations, the Easy Network employs cluster-level contrastive learning in a dedicated embedding space, explicitly enforcing intra-cluster compactness and inter-cluster separation, thereby enhancing the accuracy and robustness of the pseudo-labels.
Conversely, the Hard Network handles the low-reliability Hard Source. Instead of relying on its potentially noisy structure for prototype matching, this branch focuses on refining the source’s representations through adversarial training. A domain discriminator is trained to distinguish between the Hard Source and the target domain, while the feature extractor is trained to confuse it. This adversarial process helps align the distribution of the low-quality source with the target, gradually improving its transferability and robustness against noise and ambiguity.
Third, to maintain semantic coherence between the two branches, which learn from data of vastly different quality, a Cross-Network Consistency Loss is introduced. This loss constrains the prediction outputs of the Hard Network to align with the high-confidence pseudo-labels generated by the Easy Network. It ensures that while the Hard Network learns to extract useful information from challenging sources, the final decision logic remains anchored to the stable structural knowledge provided by the reliable Easy Source.
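The consistency term amounts to a cross-entropy between the Hard Network's predictions and the Easy Network's pseudo-labels, restricted to confident target samples. A minimal NumPy sketch, with `mask` an assumed boolean confidence mask rather than a detail given in the summary:

```python
import numpy as np

def consistency_loss(hard_logits, easy_pseudo, mask):
    """Cross-entropy of the Hard Network's logits against the Easy
    Network's pseudo-labels, averaged over confident samples only."""
    if not mask.any():
        return 0.0
    z = hard_logits - hard_logits.max(axis=1, keepdims=True)
    logp = z - np.log(np.exp(z).sum(axis=1, keepdims=True))
    idx = np.arange(len(easy_pseudo))
    return float(-logp[idx, easy_pseudo][mask].mean())
```

Gating the loss on confident pseudo-labels keeps noisy Easy Network guesses from propagating into the Hard Network's training signal.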
Extensive experiments on three benchmark datasets (SEED, SEED-IV, and DEAP) under both cross-subject and cross-dataset evaluation protocols demonstrate that HEDN consistently outperforms state-of-the-art single-source and multi-source domain adaptation methods in recognition accuracy. Crucially, HEDN achieves this superior performance while significantly reducing adaptation complexity. Unlike traditional MSDA methods whose parameters grow with the number of sources, HEDN maintains a fixed two-branch architecture regardless of the source count, making it highly scalable and computationally efficient. The results validate HEDN’s effectiveness in mitigating negative transfer and its strong potential for practical deployment in real-world BCI systems requiring robust cross-subject generalization.