MEG-to-MEG Transfer Learning and Cross-Task Speech/Silence Detection with Limited Data
February 20, 2026
Authors: Xabier de Zuazo, Vincenzo Verbeni, Eva Navas, Ibon Saratxaga, Mathieu Bourguignon, Nicola Molinaro
cs.AI
Abstract
Data-efficient neural decoding is a central challenge for speech brain-computer interfaces. We present the first demonstration of transfer learning and cross-task decoding for MEG-based speech models spanning perception and production. We pre-train a Conformer-based model on 50 hours of single-subject listening data and fine-tune on just 5 minutes per subject across 18 participants. Transfer learning yields consistent improvements, with in-task accuracy gains of 1-4% and larger cross-task gains of up to 5-6%. Not only does pre-training improve performance within each task, but it also enables reliable cross-task decoding between perception and production. Critically, models trained on speech production decode passive listening above chance, confirming that learned representations reflect shared neural processes rather than task-specific motor activity.
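The transfer-learning protocol the abstract describes — pre-train on plentiful data from one source subject, then warm-start a per-subject model from those weights using only minutes of target data — can be sketched in miniature. This is a hedged illustration only: the paper's Conformer over MEG recordings is replaced here by a toy logistic-regression classifier on synthetic feature windows, and all names, shapes, and noise scales are assumptions, not the authors' implementation.

```python
import numpy as np

# Toy stand-in for the paper's pipeline: classify feature windows as
# speech vs. silence. The model, data, and scales are illustrative only.
rng = np.random.default_rng(0)

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-np.clip(z, -30, 30)))

def train(X, y, w=None, epochs=200, lr=0.1):
    """Gradient-descent logistic regression; pass weights w to fine-tune."""
    if w is None:
        w = np.zeros(X.shape[1])  # training from scratch
    for _ in range(epochs):
        grad = X.T @ (sigmoid(X @ w) - y) / len(y)
        w -= lr * grad
    return w

def accuracy(w, X, y):
    return np.mean((sigmoid(X @ w) > 0.5) == y)

# "Pre-training": abundant labeled windows from one source subject.
w_true = rng.normal(size=8)
X_src = rng.normal(size=(5000, 8))
y_src = ((X_src @ w_true + rng.normal(scale=0.5, size=5000)) > 0).astype(float)
w_pre = train(X_src, y_src)

# "Fine-tuning": only a handful of windows from a new subject whose
# responses are a perturbed version of the source subject's.
w_subj = w_true + rng.normal(scale=0.3, size=8)
X_tgt = rng.normal(size=(40, 8))
y_tgt = ((X_tgt @ w_subj) > 0).astype(float)

w_scratch = train(X_tgt, y_tgt)             # limited target data only
w_ft = train(X_tgt, y_tgt, w=w_pre.copy())  # warm-started from pre-training

# Evaluate both models on held-out target-subject data.
X_test = rng.normal(size=(2000, 8))
y_test = ((X_test @ w_subj) > 0).astype(float)
print(f"scratch={accuracy(w_scratch, X_test, y_test):.2f} "
      f"fine-tuned={accuracy(w_ft, X_test, y_test):.2f}")
```

The warm-started model typically matches or beats the from-scratch model because the pre-trained weights already encode structure shared across subjects, mirroring the paper's finding that pre-training helps most when per-subject data is scarce.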