
A Brain Wave Encodes a Thousand Tokens: Modeling Inter-Cortical Neural Interactions for Effective EEG-based Emotion Recognition

November 17, 2025
Authors: Nilay Kumar, Priyansh Bhandari, G. Maragatham
cs.AI

Abstract

Human emotions are difficult to convey through words and are often abstracted in the process; electroencephalogram (EEG) signals, by contrast, offer a more direct window into emotional brain activity. Recent studies show that deep learning models can process these signals to perform emotion recognition with high accuracy. However, many existing approaches overlook the dynamic interplay between distinct brain regions, which can be crucial to understanding how emotions unfold and evolve over time and could aid more accurate emotion recognition. To address this, we propose RBTransformer, a Transformer-based neural network architecture that models the brain's inter-cortical neural dynamics in latent space to better capture structured neural interactions for effective EEG-based emotion recognition. First, the EEG signals are converted into Band Differential Entropy (BDE) tokens, which are then passed through Electrode Identity embeddings to retain spatial provenance. These tokens are processed by successive inter-cortical multi-head attention blocks that construct an electrode × electrode attention matrix, allowing the model to learn inter-cortical neural dependencies. The resulting features are passed through a classification head to obtain the final prediction. We conducted extensive experiments under subject-dependent settings on the SEED, DEAP, and DREAMER datasets, across the Valence and Arousal dimensions, and additionally Dominance for DEAP and DREAMER, under both binary and multi-class classification settings. The results demonstrate that RBTransformer outperforms all previous state-of-the-art methods across all three datasets, on every dimension, under both classification settings. The source code is available at: https://github.com/nnilayy/RBTransformer.
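To make the tokenization step concrete: Band Differential Entropy is conventionally computed under a Gaussian assumption, where the differential entropy of a band-filtered signal reduces to DE = ½ log(2πeσ²). The sketch below is a minimal illustration of producing one BDE token per electrode, not the paper's released code; the band edges, the simple FFT-mask band-pass filter, and all function names are assumptions made for the example.

```python
import numpy as np

# Conventional EEG frequency bands in Hz (illustrative choice, not taken from the paper).
BANDS = {"delta": (1, 4), "theta": (4, 8), "alpha": (8, 14),
         "beta": (14, 31), "gamma": (31, 50)}

def bandpass(x, fs, low, high):
    """Crude band-pass filter: zero out FFT bins outside [low, high] Hz."""
    spectrum = np.fft.rfft(x)
    freqs = np.fft.rfftfreq(x.size, d=1.0 / fs)
    spectrum[(freqs < low) | (freqs > high)] = 0.0
    return np.fft.irfft(spectrum, n=x.size)

def band_differential_entropy(x, fs, band):
    """DE of one electrode's signal in one band, assuming Gaussianity:
    DE = 0.5 * log(2 * pi * e * var)."""
    low, high = band
    filtered = bandpass(x, fs, low, high)
    return 0.5 * np.log(2.0 * np.pi * np.e * np.var(filtered))

def bde_tokens(eeg, fs):
    """eeg: (n_electrodes, n_samples) array -> (n_electrodes, n_bands) token matrix.
    Each row is one electrode's token of per-band DE features."""
    return np.array([[band_differential_entropy(ch, fs, band)
                      for band in BANDS.values()]
                     for ch in eeg])
```

Each electrode then contributes one token, to which an electrode-identity embedding would be added before the inter-cortical attention blocks; attention over this token sequence is what yields the electrode × electrode attention matrix described in the abstract.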