MusicMagus: Zero-Shot Text-to-Music Editing via Diffusion Models
February 9, 2024
Authors: Yixiao Zhang, Yukara Ikemiya, Gus Xia, Naoki Murata, Marco Martínez, Wei-Hsiang Liao, Yuki Mitsufuji, Simon Dixon
cs.AI
Abstract
Recent advances in text-to-music generation models have opened new avenues in
musical creativity. However, music generation usually involves iterative
refinements, and how to edit the generated music remains a significant
challenge. This paper introduces a novel approach to the editing of music
generated by such models, enabling the modification of specific attributes,
such as genre, mood and instrument, while maintaining other aspects unchanged.
Our method transforms text editing into latent space manipulation while
adding an extra constraint to enforce consistency. It seamlessly integrates
with existing pretrained text-to-music diffusion models without requiring
additional training. Experimental results demonstrate superior performance over
both zero-shot and certain supervised baselines in style and timbre transfer
evaluations. Additionally, we showcase the practical applicability of our
approach in real-world music editing scenarios.
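
To make the high-level idea concrete, the following is a minimal, hypothetical Python sketch of zero-shot editing via latent space manipulation with a consistency constraint. The toy text encoder, toy denoiser, the Euler-style update, and the `consistency` blending weight are illustrative assumptions rather than the authors' implementation; a practical setup would plug in a pretrained text-to-music diffusion model, its text encoder, and its sampler.

```python
import torch
import torch.nn as nn

# Toy stand-ins (hypothetical): a real system would use a pretrained text
# encoder and a pretrained text-to-music latent diffusion model.
class ToyTextEncoder(nn.Module):
    def __init__(self, vocab_size=1000, dim=64):
        super().__init__()
        self.emb = nn.Embedding(vocab_size, dim)

    def forward(self, token_ids):
        # Mean-pool token embeddings into one prompt embedding per example.
        return self.emb(token_ids).mean(dim=1)


class ToyDenoiser(nn.Module):
    def __init__(self, latent_dim=32, cond_dim=64):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(latent_dim + cond_dim + 1, 128),
            nn.SiLU(),
            nn.Linear(128, latent_dim),
        )

    def forward(self, z, t, cond):
        # Predict noise from the latent, a scalar timestep, and the text condition.
        t_feat = torch.full((z.size(0), 1), float(t))
        return self.net(torch.cat([z, cond, t_feat], dim=-1))


@torch.no_grad()
def zero_shot_edit(denoiser, z_T, cond_src, cond_tgt, steps=50, consistency=0.3):
    """Denoise the same starting noise twice: once under the source prompt and
    once under the edited prompt, pulling the edited trajectory toward the
    source trajectory so attributes not mentioned in the edit stay unchanged."""
    z_src, z_tgt = z_T.clone(), z_T.clone()
    for i in reversed(range(steps)):
        t = i / steps
        eps_src = denoiser(z_src, t, cond_src)
        eps_tgt = denoiser(z_tgt, t, cond_tgt)
        # Simplified Euler-style update; a real sampler would follow the
        # diffusion model's noise schedule (e.g. DDIM).
        z_src = z_src - eps_src / steps
        z_tgt = z_tgt - eps_tgt / steps
        # Consistency constraint: blend the edited latents toward the source
        # trajectory to keep non-edited musical content intact.
        z_tgt = (1.0 - consistency) * z_tgt + consistency * z_src
    return z_tgt


if __name__ == "__main__":
    encoder, denoiser = ToyTextEncoder(), ToyDenoiser()
    src_prompt = torch.randint(0, 1000, (1, 8))  # e.g. "relaxing piano jazz"
    tgt_prompt = torch.randint(0, 1000, (1, 8))  # e.g. "relaxing guitar jazz"
    cond_src, cond_tgt = encoder(src_prompt), encoder(tgt_prompt)
    edited = zero_shot_edit(denoiser, torch.randn(1, 32), cond_src, cond_tgt)
    print(edited.shape)  # torch.Size([1, 32]); decode to audio in a real system
```

Note that everything in this sketch happens at inference time: no weights are updated, which mirrors the paper's claim that the approach integrates with existing pretrained diffusion models without additional training.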