MusicMagus: Zero-Shot Text-to-Music Editing via Diffusion Models
February 9, 2024
作者: Yixiao Zhang, Yukara Ikemiya, Gus Xia, Naoki Murata, Marco Martínez, Wei-Hsiang Liao, Yuki Mitsufuji, Simon Dixon
cs.AI
Abstract
Recent advances in text-to-music generation models have opened new avenues in
musical creativity. However, music generation usually involves iterative
refinements, and how to edit the generated music remains a significant
challenge. This paper introduces a novel approach to the editing of music
generated by such models, enabling the modification of specific attributes,
such as genre, mood and instrument, while maintaining other aspects unchanged.
Our method transforms text editing into latent space manipulation, while
adding an extra constraint to enforce consistency. It seamlessly integrates
with existing pretrained text-to-music diffusion models without requiring
additional training. Experimental results demonstrate superior performance over
both zero-shot and certain supervised baselines in style and timbre transfer
evaluations. Additionally, we showcase the practical applicability of our
approach in real-world music editing scenarios.
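The core idea stated in the abstract, turning a text edit into a latent-space operation on a pretrained diffusion model while an extra constraint preserves unedited attributes, can be illustrated with a toy sketch. Everything below is hypothetical: the encode_text/predict_noise interface, the consistency weight, and the denoising update are stand-ins for a real text encoder and diffusion sampler, not the authors' implementation.

```python
# Toy sketch (not the paper's code) of zero-shot text-to-music editing via
# latent-space manipulation with a consistency constraint.
import numpy as np

DIM = 16  # toy embedding / latent dimensionality

def encode_text(prompt: str) -> np.ndarray:
    # Stand-in for a real text encoder: deterministically hash the prompt to a vector.
    seed = abs(hash(prompt)) % (2**32)
    return np.random.default_rng(seed).standard_normal(DIM)

def predict_noise(latent: np.ndarray, t: int, cond: np.ndarray) -> np.ndarray:
    # Stand-in for the diffusion model's conditional noise prediction.
    return 0.1 * latent + 0.05 * cond

def edit(latent: np.ndarray, src_prompt: str, tgt_prompt: str,
         steps: int = 10, consistency: float = 0.3) -> np.ndarray:
    # 1. Express the text edit as a direction in the conditioning latent space.
    src, tgt = encode_text(src_prompt), encode_text(tgt_prompt)
    direction = tgt - src
    x = latent
    for t in reversed(range(steps)):
        eps_edit = predict_noise(x, t, src + direction)  # push toward the edited attribute
        eps_keep = predict_noise(x, t, src)              # anchor to the original prompt
        # 2. Extra constraint: blend the two predictions so that aspects
        #    unrelated to the edit stay consistent with the original music.
        eps = (1 - consistency) * eps_edit + consistency * eps_keep
        x = x - 0.1 * eps  # toy denoising update
    return x

# Usage: edit "piano" to "guitar" while keeping the rest of the prompt fixed.
rng = np.random.default_rng(0)
edited = edit(rng.standard_normal(DIM), "calm piano piece", "calm guitar piece")
print(edited.shape)
```

In this sketch the consistency weight trades off edit strength against preservation of the original material; the paper's actual mechanism operates inside a pretrained text-to-music diffusion model without any additional training.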