Music ControlNet: Multiple Time-varying Controls for Music Generation
November 13, 2023
Authors: Shih-Lun Wu, Chris Donahue, Shinji Watanabe, Nicholas J. Bryan
cs.AI
Abstract
Text-to-music generation models are now capable of generating high-quality
music audio in broad styles. However, text control is primarily suitable for
the manipulation of global musical attributes like genre, mood, and tempo, and
is less suitable for precise control over time-varying attributes such as the
positions of beats in time or the changing dynamics of the music. We propose
Music ControlNet, a diffusion-based music generation model that offers multiple
precise, time-varying controls over generated audio. To imbue text-to-music
models with time-varying control, we propose an approach analogous to
pixel-wise control of the image-domain ControlNet method. Specifically, we
extract controls from training audio yielding paired data, and fine-tune a
diffusion-based conditional generative model over audio spectrograms given
melody, dynamics, and rhythm controls. While the image-domain Uni-ControlNet
method already allows generation with any subset of controls, we devise a new
strategy to allow creators to input controls that are only partially specified
in time. We evaluate both on controls extracted from audio and controls we
expect creators to provide, demonstrating that we can generate realistic music
that corresponds to control inputs in both settings. While few comparable music
generation models exist, we benchmark against MusicGen, a recent model that
accepts text and melody input, and show that our model generates music that is
49% more faithful to input melodies despite having 35x fewer parameters,
training on 11x less data, and enabling two additional forms of time-varying
control. Sound examples can be found at https://MusicControlNet.github.io/web/.
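The abstract describes extracting melody, dynamics, and rhythm controls from training audio to form paired data for fine-tuning, but it does not spell out the feature extractors. The following is a minimal sketch, assuming standard librosa features as stand-ins: a chromagram argmax for melody, RMS energy for dynamics, and onset/beat tracking for rhythm. The actual Music ControlNet extraction pipeline may differ.

    # Illustrative sketch only -- not the paper's exact pipeline.
    # Derives frame-level melody, dynamics, and rhythm controls from audio.
    import numpy as np
    import librosa

    def extract_controls(path: str, sr: int = 22050, hop_length: int = 512):
        y, sr = librosa.load(path, sr=sr)

        # Melody: one-hot dominant pitch class per frame
        # (argmax of a 12-bin chromagram).
        chroma = librosa.feature.chroma_cqt(y=y, sr=sr, hop_length=hop_length)
        melody = np.zeros_like(chroma)
        melody[chroma.argmax(axis=0), np.arange(chroma.shape[1])] = 1.0

        # Dynamics: frame-wise RMS energy converted to decibels.
        rms = librosa.feature.rms(y=y, hop_length=hop_length)[0]
        dynamics = librosa.amplitude_to_db(rms, ref=np.max)

        # Rhythm: onset-strength envelope with a binary beat indicator.
        onset_env = librosa.onset.onset_strength(y=y, sr=sr, hop_length=hop_length)
        _, beats = librosa.beat.beat_track(onset_envelope=onset_env, sr=sr,
                                           hop_length=hop_length)
        rhythm = np.zeros_like(onset_env)
        rhythm[beats] = 1.0

        return {"melody": melody, "dynamics": dynamics, "rhythm": rhythm}

For controls that are only partially specified in time, one simple (hypothetical) scheme consistent with the abstract's description is to zero out each control array outside the creator-specified spans and supply a binary mask marking where each control is active; the paper's actual masking strategy is not given in this excerpt.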