A Brain Wave Encodes a Thousand Tokens: Modeling Inter-Cortical Neural Interactions for Effective EEG-based Emotion Recognition
November 17, 2025
Authors: Nilay Kumar, Priyansh Bhandari, G. Maragatham
cs.AI
Abstract
Human emotions are difficult to convey through words and are often abstracted in the process; electroencephalogram (EEG) signals, however, offer a more direct lens into emotional brain activity. Recent studies show that deep learning models can process these signals to perform emotion recognition with high accuracy. Many existing approaches, though, overlook the dynamic interplay between distinct brain regions, which can be crucial to understanding how emotions unfold and evolve over time and may aid more accurate emotion recognition. To address this, we propose RBTransformer, a Transformer-based neural network architecture that models the brain's inter-cortical neural dynamics in latent space to better capture structured neural interactions for effective EEG-based emotion recognition. First, the EEG signals are converted into Band Differential Entropy (BDE) tokens, which are then combined with Electrode Identity embeddings to retain spatial provenance. These tokens are processed through successive inter-cortical multi-head attention blocks that construct an electrode × electrode attention matrix, allowing the model to learn inter-cortical neural dependencies. The resulting features are passed through a classification head to obtain the final prediction. We conducted extensive experiments, specifically under subject-dependent settings, on the SEED, DEAP, and DREAMER datasets, over the Valence, Arousal, and Dominance dimensions (the latter for DEAP and DREAMER), under both binary and multi-class classification settings. The results demonstrate that the proposed RBTransformer outperforms all previous state-of-the-art methods across all three datasets, over all three dimensions, under both classification settings. The source code is available at: https://github.com/nnilayy/RBTransformer.
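The pipeline described above (BDE tokenization, electrode-identity embeddings, and attention over electrodes) can be illustrated with a minimal NumPy sketch. This is not the authors' implementation: the band edges, embedding scale, and single-head attention are illustrative assumptions, and `band_differential_entropy` is a hypothetical helper. Under a Gaussian assumption, the differential entropy of a band-limited signal is 0.5 · ln(2πe·σ²).

```python
import numpy as np

def band_differential_entropy(x, fs, bands=None):
    """One BDE value per frequency band for a single electrode's signal.

    Hypothetical helper sketching the BDE tokenization step. Band edges
    are conventional EEG bands, not taken from the paper.
    """
    if bands is None:
        bands = {"delta": (1, 4), "theta": (4, 8), "alpha": (8, 13),
                 "beta": (13, 30), "gamma": (30, 50)}
    freqs = np.fft.rfftfreq(len(x), d=1.0 / fs)
    spectrum = np.fft.rfft(x)
    tokens = []
    for lo, hi in bands.values():
        mask = (freqs >= lo) & (freqs < hi)
        # Zero out-of-band bins, invert back to time domain.
        band_sig = np.fft.irfft(np.where(mask, spectrum, 0), n=len(x))
        var = band_sig.var() + 1e-12  # guard against log(0)
        tokens.append(0.5 * np.log(2 * np.pi * np.e * var))
    return np.array(tokens)

rng = np.random.default_rng(0)
n_electrodes, fs, n_samples = 32, 128, 256
eeg = rng.standard_normal((n_electrodes, n_samples))  # placeholder EEG window

# One token per electrode: BDE vector plus an electrode-identity embedding
# (randomly initialized here; learned in the actual model).
bde = np.stack([band_differential_entropy(ch, fs) for ch in eeg])   # (32, 5)
identity = 0.01 * rng.standard_normal((n_electrodes, bde.shape[1]))
tokens = bde + identity

# Single-head self-attention over electrode tokens: the score matrix is
# electrode x electrode, i.e. one weight per pair of electrodes.
d = tokens.shape[1]
scores = tokens @ tokens.T / np.sqrt(d)
attn = np.exp(scores - scores.max(axis=1, keepdims=True))
attn /= attn.sum(axis=1, keepdims=True)   # rows sum to 1
out = attn @ tokens                       # attended electrode features
print(attn.shape)
```

In the full model this attention is multi-head, stacked in successive blocks, and followed by a classification head; the sketch only shows why the attention matrix has an electrode × electrode shape.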