Controllable Music Production with Diffusion Models and Guidance Gradients
November 1, 2023
Authors: Mark Levy, Bruno Di Giorgi, Floris Weers, Angelos Katharopoulos, Tom Nickson
cs.AI
Abstract
We demonstrate how conditional generation from diffusion models can be used
to tackle a variety of realistic tasks in the production of music in 44.1kHz
stereo audio with sampling-time guidance. The scenarios we consider include
continuation, inpainting and regeneration of musical audio, the creation of
smooth transitions between two different music tracks, and the transfer of
desired stylistic characteristics to existing audio clips. We achieve this by
applying guidance at sampling time in a simple framework that supports both
reconstruction and classification losses, or any combination of the two. This
approach ensures that generated audio can match its surrounding context, or
conform to a class distribution or latent representation specified relative to
any suitable pre-trained classifier or embedding model.
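As a concrete illustration of the sampling-time guidance the abstract describes, here is a minimal sketch (not the authors' released code) of one reverse-diffusion step in which the gradient of a sum of losses on the denoised estimate steers the sample. The deterministic DDIM update, the `model(x_t, t)` noise-prediction interface, and the `make_context_loss` helper are assumptions for illustration, not the paper's exact formulation.

```python
import torch

def guided_ddim_step(model, x_t, t, t_prev, alpha_bar, losses, scale=1.0):
    """One deterministic DDIM reverse step with gradient guidance.

    Assumed interface: `model(x_t, t)` predicts the noise eps, and
    `losses` is a list of callables mapping the denoised estimate
    x0_hat to a scalar (e.g. a reconstruction loss against known
    surrounding audio, or a classifier/embedding loss). The gradient
    of their sum with respect to x_t steers the sample at every step.
    """
    x_t = x_t.detach().requires_grad_(True)
    with torch.enable_grad():  # allow guidance even inside a no_grad sampler
        eps = model(x_t, t)                               # predicted noise
        a = alpha_bar[t]
        x0_hat = (x_t - (1 - a).sqrt() * eps) / a.sqrt()  # denoised estimate
        total = sum(loss(x0_hat) for loss in losses)
        grad = torch.autograd.grad(total, x_t)[0]
    # Descending the loss in x0 space corresponds to adding its gradient,
    # scaled by sqrt(1 - alpha_bar), to the noise prediction.
    eps = eps + scale * (1 - a).sqrt() * grad
    x0_hat = (x_t - (1 - a).sqrt() * eps) / a.sqrt()
    a_prev = alpha_bar[t_prev]
    return (a_prev.sqrt() * x0_hat + (1 - a_prev).sqrt() * eps).detach()

def make_context_loss(context, mask):
    """Hypothetical reconstruction loss for inpainting or continuation:
    penalize deviation from the known audio on the unmasked region."""
    return lambda x0_hat: ((x0_hat - context) * mask).pow(2).mean()
```

For example, inpainting a masked span of a clip would pass `losses=[make_context_loss(clip, keep_mask)]` and run this step over the full noise schedule; combining it with a classifier loss in the same list recovers the "any combination of the two" behaviour the abstract mentions.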