
Brain2Music: Reconstructing Music from Human Brain Activity

July 20, 2023
作者: Timo I. Denk, Yu Takagi, Takuya Matsuyama, Andrea Agostinelli, Tomoya Nakai, Christian Frank, Shinji Nishimoto
cs.AI

Abstract

The process of reconstructing experiences from human brain activity offers a unique lens into how the brain interprets and represents the world. In this paper, we introduce a method for reconstructing music from brain activity, captured using functional magnetic resonance imaging (fMRI). Our approach uses either music retrieval or the MusicLM music generation model conditioned on embeddings derived from fMRI data. The generated music resembles the musical stimuli that human subjects experienced, with respect to semantic properties like genre, instrumentation, and mood. We investigate the relationship between different components of MusicLM and brain activity through a voxel-wise encoding modeling analysis. Furthermore, we discuss which brain regions represent information derived from purely textual descriptions of music stimuli. We provide supplementary material including examples of the reconstructed music at https://google-research.github.io/seanet/brain2music
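The snippet below is a minimal illustrative sketch, not the authors' released code. It assumes the decoding step can be approximated by a ridge regression from fMRI voxel responses to music embeddings (the 128-dimensional size is a placeholder), followed by the retrieval variant that picks the most similar corpus clip by cosine similarity; the arrays `X`, `Y`, and `corpus_embeddings` are random stand-ins for real data. In the generation variant described in the abstract, the predicted embeddings would instead condition a music generation model such as MusicLM.

```python
# Hypothetical sketch of fMRI-to-music-embedding decoding plus retrieval.
# All data below are random placeholders, not real fMRI or audio embeddings.
import numpy as np
from sklearn.linear_model import Ridge
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)

# Toy stand-ins:
#   X: fMRI responses, one row per music stimulus (n_stimuli x n_voxels)
#   Y: music embeddings of the same stimuli (n_stimuli x embed_dim; 128 is assumed)
n_stimuli, n_voxels, embed_dim = 480, 6000, 128
X = rng.standard_normal((n_stimuli, n_voxels))
Y = rng.standard_normal((n_stimuli, embed_dim))

X_train, X_test, Y_train, Y_test = train_test_split(
    X, Y, test_size=0.2, random_state=0
)

# Decoding model: regularized linear map from voxel space to embedding space.
decoder = Ridge(alpha=1e3)
decoder.fit(X_train, Y_train)
Y_pred = decoder.predict(X_test)  # predicted embeddings for held-out scans

# Retrieval variant: choose the corpus clip whose embedding is closest
# (cosine similarity) to each predicted embedding.
corpus_embeddings = rng.standard_normal((10_000, embed_dim))  # placeholder corpus


def normalize(a):
    return a / np.linalg.norm(a, axis=1, keepdims=True)


sims = normalize(Y_pred) @ normalize(corpus_embeddings).T  # (n_test x corpus)
retrieved = sims.argmax(axis=1)  # index of the best-matching clip per scan
print(retrieved[:5])

# The voxel-wise encoding analysis mentioned in the abstract runs in the
# opposite direction: fit one regression per voxel that predicts its response
# from the model-derived features.
```

The ridge penalty and the cosine-similarity retrieval rule are assumptions made for this sketch; they stand in for whatever regularization and matching criterion the paper actually uses.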