

Brain2Music: Reconstructing Music from Human Brain Activity

July 20, 2023
Authors: Timo I. Denk, Yu Takagi, Takuya Matsuyama, Andrea Agostinelli, Tomoya Nakai, Christian Frank, Shinji Nishimoto
cs.AI

Abstract

The process of reconstructing experiences from human brain activity offers a unique lens into how the brain interprets and represents the world. In this paper, we introduce a method for reconstructing music from brain activity, captured using functional magnetic resonance imaging (fMRI). Our approach uses either music retrieval or the MusicLM music generation model conditioned on embeddings derived from fMRI data. The generated music resembles the musical stimuli that human subjects experienced, with respect to semantic properties like genre, instrumentation, and mood. We investigate the relationship between different components of MusicLM and brain activity through a voxel-wise encoding modeling analysis. Furthermore, we discuss which brain regions represent information derived from purely textual descriptions of music stimuli. We provide supplementary material including examples of the reconstructed music at https://google-research.github.io/seanet/brain2music
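The abstract describes conditioning music retrieval or generation on embeddings predicted from fMRI data. Below is a minimal, hypothetical sketch of the retrieval variant of that idea: a regularized linear map from voxel responses to a music-embedding space, followed by nearest-neighbour lookup in a candidate corpus. It is not the authors' implementation; the array names, dimensions, and the use of ridge regression are illustrative assumptions, and the embedding space is left abstract.

```python
# Hypothetical sketch of fMRI-to-music-embedding decoding plus retrieval.
# All data below is random placeholder data; shapes and names are assumptions.
import numpy as np
from sklearn.linear_model import Ridge

# fmri_*: (n_samples, n_voxels) voxel responses; embed_*: (n_samples, d) music embeddings
fmri_train = np.random.randn(400, 6000)
embed_train = np.random.randn(400, 128)
fmri_test = np.random.randn(60, 6000)

# Fit a regularized linear map from voxel space to the embedding space.
decoder = Ridge(alpha=1e3)
decoder.fit(fmri_train, embed_train)
pred_embeds = decoder.predict(fmri_test)        # (60, 128) predicted embeddings

# Retrieval step: for each predicted embedding, pick the candidate music clip
# whose embedding has the highest cosine similarity.
candidate_embeds = np.random.randn(1000, 128)   # embeddings of a candidate music corpus

def l2_normalize(x):
    return x / np.linalg.norm(x, axis=-1, keepdims=True)

sims = l2_normalize(pred_embeds) @ l2_normalize(candidate_embeds).T
retrieved = sims.argmax(axis=1)                 # index of best-matching clip per test scan
```

In the generation variant described in the abstract, the predicted embeddings would instead condition a music generation model such as MusicLM rather than index into a fixed corpus.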