Cognitive Inference based Feature Pyramid Network for Sentimental Analysis using EEG Signals
Sentiment analysis using Electroencephalography (EEG) signals provides a deeper behavioral understanding of a person’s emotional state, offering insight into real-time mood fluctuations. Because it draws directly on the brain’s electrical activity, the approach is a promising tool for applications such as mental health monitoring, affective computing, and personalized user experiences. To enhance this process, an encoder-based model for EEG-to-sentiment analysis is proposed, built on the ZUCO 2.0 dataset and incorporating a Feature Pyramid Network (FPN). FPNs are adapted here to EEG sensor data, enabling multiscale feature extraction that captures both local and global sentiment-related patterns. The raw EEG data from ZUCO 2.0 are pre-processed and passed through the FPN, which extracts hierarchical features; these features are then fed to a Gated Recurrent Unit (GRU) to model temporal dependencies, improving the accuracy of sentiment classification. ZUCO 2.0 is used for its detailed 128-channel recordings, which offer rich spatial and temporal resolution. Experimental results show that the proposed architecture achieves a 6.88% performance gain over existing methods, and the framework further demonstrates its efficacy on the DEAP and SEED validation datasets.
💡 Research Summary
The paper proposes a novel encoder‑based architecture for EEG‑driven sentiment analysis, leveraging a Feature Pyramid Network (FPN) adapted to multichannel brain signals and a Gated Recurrent Unit (GRU) for temporal modeling. Raw EEG recordings from the ZUCO 2.0 dataset (18 subjects, 128 channels, millisecond resolution) are first band‑pass filtered between 0.5 Hz and 30 Hz, then flattened from a 3‑D tensor (samples × channels × time) into a 2‑D matrix suitable for dense layers.
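The preprocessing described above can be sketched as follows. This is a minimal illustration, not the authors' code: the sampling rate (500 Hz) and filter order are assumptions, since the summary reports only the 0.5–30 Hz pass-band and the 3-D-to-2-D flattening.

```python
import numpy as np
from scipy.signal import butter, filtfilt

def preprocess_eeg(raw, fs=500.0, low=0.5, high=30.0, order=4):
    """Band-pass filter each channel, then flatten (samples x channels x time)
    into a 2-D matrix for dense layers. fs and order are assumed values."""
    nyq = 0.5 * fs
    b, a = butter(order, [low / nyq, high / nyq], btype="band")
    filtered = filtfilt(b, a, raw, axis=-1)   # zero-phase filter along time
    n_samples = raw.shape[0]
    return filtered.reshape(n_samples, -1)    # (samples, channels * time)

# toy batch: 4 trials, 128 channels, 1000 time points
X = preprocess_eeg(np.random.randn(4, 128, 1000))
print(X.shape)  # (4, 128000)
```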
Instead of the conventional CNN‑based FPN, the authors replace the convolutional backbone with an autoencoder consisting of three encoder and three decoder layers. Each encoder reduces dimensionality (e.g., 128 → 64 → 32 units) while preserving intermediate activations for skip connections with the corresponding decoder, thereby constructing a hierarchical pyramid of feature maps. Down‑sampling is performed via stride‑2 convolutions, and up‑sampling uses padding‑based interpolation followed by element‑wise addition, as described by equations (1) and (2). This design is intended to capture both fine‑grained local patterns and coarse global structures inherent in EEG signals, while also providing robustness to noise through the autoencoder’s latent compression.
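The hierarchical encoder/decoder pyramid with additive skip connections can be illustrated with a small numpy sketch. Dense projections stand in for the stride-2 convolutions and interpolation for simplicity; the layer sizes follow the 128 → 64 → 32 example above, and all weights are random placeholders, not the paper's parameters.

```python
import numpy as np

rng = np.random.default_rng(0)

def relu(x):
    return np.maximum(x, 0.0)

# Hypothetical layer sizes following the 128 -> 64 -> 32 example.
sizes = [128, 64, 32]
enc_W = [rng.standard_normal((sizes[i], sizes[i + 1])) * 0.1 for i in range(2)]
dec_W = [rng.standard_normal((sizes[i + 1], sizes[i])) * 0.1 for i in range(2)]

def fpn_forward(x):
    """Encoder caches intermediate activations; the decoder adds the matching
    skip connection element-wise, yielding a pyramid of feature maps."""
    skips = [x]
    h = x
    for W in enc_W:                      # 128 -> 64 -> 32
        h = relu(h @ W)
        skips.append(h)
    pyramid = [h]                        # coarsest (most compressed) level
    for W, skip in zip(reversed(dec_W), reversed(skips[:-1])):
        h = relu(h @ W) + skip           # up-project, fuse by addition
        pyramid.append(h)
    return pyramid                       # feature maps of width 32, 64, 128

levels = fpn_forward(rng.standard_normal(128))
print([p.shape[0] for p in levels])  # [32, 64, 128]
```

The element-wise addition is the key FPN ingredient: each decoder level is refined by the same-resolution encoder activation, so the pyramid mixes coarse global structure with finer local detail.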
The multiscale representations output by the FPN are fed into a GRU module, which models sequential dependencies across time. The GRU’s gating mechanisms (reset, update, and candidate gates) enable efficient learning of long‑range temporal dynamics with fewer parameters than an LSTM, making it suitable for real‑time or resource‑constrained scenarios. The final hidden state is passed through a softmax classifier to predict binary sentiment labels (positive vs. negative).
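The gating described above corresponds to the standard GRU formulation. A minimal single-step sketch, with illustrative dimensions and random weights (not the paper's configuration):

```python
import numpy as np

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

def gru_step(x, h, params):
    """One GRU step: update gate z, reset gate r, candidate state h_tilde."""
    Wz, Uz, Wr, Ur, Wh, Uh = params
    z = sigmoid(Wz @ x + Uz @ h)               # update gate
    r = sigmoid(Wr @ x + Ur @ h)               # reset gate
    h_tilde = np.tanh(Wh @ x + Uh @ (r * h))   # candidate state
    return (1.0 - z) * h + z * h_tilde         # interpolate old and new state

d_in, d_h = 32, 16                             # illustrative sizes
rng = np.random.default_rng(1)
params = [rng.standard_normal((d_h, d_in)) * 0.1 if i % 2 == 0
          else rng.standard_normal((d_h, d_h)) * 0.1 for i in range(6)]

h = np.zeros(d_h)
for t in range(10):                            # roll over a toy sequence
    h = gru_step(rng.standard_normal(d_in), h, params)
print(h.shape)  # (16,)
```

Unlike an LSTM, there is no separate cell state and no output gate, which is the source of the parameter savings mentioned above.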
Experimental evaluation on ZUCO 2.0 shows that the proposed pipeline achieves a 6.88% improvement in classification accuracy over several baselines, including CNN‑LSTM, CNN‑GRU, and traditional machine‑learning classifiers (SVM, Random Forest). To assess generalization, the model is also tested on two public affective EEG corpora, DEAP and SEED, where it yields modest gains (approximately 4–5% higher accuracy) over the same baselines, indicating that the multiscale FPN + GRU combination can transfer across datasets with differing stimulus protocols.
Despite these promising results, the paper leaves several critical details unspecified. Hyper‑parameters such as learning rate, batch size, optimizer choice, and exact layer dimensions are not reported, hindering reproducibility. The dataset size (18 subjects) is relatively small, and the authors do not present cross‑subject validation or statistical significance testing, raising concerns about overfitting and the robustness of the reported gains. An ablation study is absent, so the individual contributions of the autoencoder‑driven FPN and the GRU cannot be quantified. Moreover, handling of class imbalance, choice of evaluation metrics beyond accuracy (e.g., precision, recall, F1, AUC), and strategies for subject‑specific adaptation are not discussed.
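For reference, the metrics the critique calls for are straightforward to compute from binary predictions; a counts-based sketch with toy labels (not the paper's results):

```python
import numpy as np

def binary_metrics(y_true, y_pred):
    """Precision, recall, and F1 for binary sentiment labels (1 = positive).
    A fuller evaluation would also report AUC and per-class support."""
    tp = np.sum((y_pred == 1) & (y_true == 1))
    fp = np.sum((y_pred == 1) & (y_true == 0))
    fn = np.sum((y_pred == 0) & (y_true == 1))
    precision = tp / (tp + fp) if tp + fp else 0.0
    recall = tp / (tp + fn) if tp + fn else 0.0
    f1 = (2 * precision * recall / (precision + recall)
          if precision + recall else 0.0)
    return float(precision), float(recall), float(f1)

y_true = np.array([1, 1, 0, 0, 1, 0])
y_pred = np.array([1, 0, 0, 1, 1, 0])
print(binary_metrics(y_true, y_pred))  # precision = recall = F1 = 2/3 here
```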
In summary, the work introduces an innovative fusion of multiscale feature pyramids and lightweight recurrent modeling for EEG‑based sentiment analysis, demonstrating measurable performance improvements on the publicly available ZUCO 2.0 dataset and on external benchmarks. However, to move from proof‑of‑concept toward practical deployment, future research should provide comprehensive architectural specifications, conduct rigorous cross‑validation, explore domain adaptation techniques, and evaluate computational efficiency on real‑time hardware.