

AudioLDM 2: Learning Holistic Audio Generation with Self-supervised Pretraining

August 10, 2023
作者: Haohe Liu, Qiao Tian, Yi Yuan, Xubo Liu, Xinhao Mei, Qiuqiang Kong, Yuping Wang, Wenwu Wang, Yuxuan Wang, Mark D. Plumbley
cs.AI

Abstract

Although audio generation shares commonalities across different types of audio, such as speech, music, and sound effects, designing models for each type requires careful consideration of specific objectives and biases that can differ significantly from those of other types. To bring us closer to a unified perspective on audio generation, this paper proposes a framework that uses the same learning method for speech, music, and sound effect generation. Our framework introduces a general representation of audio, called the language of audio (LOA). Any audio can be translated into LOA using AudioMAE, a self-supervised pre-trained representation learning model. In the generation process, we translate any modality into LOA with a GPT-2 model, and we perform self-supervised audio generation learning with a latent diffusion model conditioned on LOA. The proposed framework naturally brings advantages such as in-context learning abilities and reusable self-supervised pretrained AudioMAE and latent diffusion models. Experiments on the major benchmarks of text-to-audio, text-to-music, and text-to-speech demonstrate new state-of-the-art results or performance competitive with previous approaches. Our demo and code are available at https://audioldm.github.io/audioldm2.
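
To make the two-stage design concrete, below is a minimal PyTorch sketch of the pipeline the abstract describes: a language model (GPT-2 in the paper) maps conditioning features to LOA, and a latent diffusion model denoises audio latents conditioned on that LOA. All module names (Text2LOA, LOADenoiser), dimensions, and the simple MLP denoiser are illustrative assumptions, not the authors' implementation; in AudioLDM 2 the denoiser is a UNet over VAE latents, and at training time the LOA condition is extracted from the target audio itself via AudioMAE, which is what makes the diffusion training self-supervised.

import torch
import torch.nn as nn

class Text2LOA(nn.Module):
    """Stage 1 (hypothetical stand-in): the paper uses GPT-2 to map a
    conditioning modality (e.g. text features) to LOA features."""
    def __init__(self, dim=768):
        super().__init__()
        layer = nn.TransformerEncoderLayer(d_model=dim, nhead=8, batch_first=True)
        self.lm = nn.TransformerEncoder(layer, num_layers=2)

    def forward(self, cond_feats):            # (B, T_cond, dim)
        return self.lm(cond_feats)            # predicted LOA features

class LOADenoiser(nn.Module):
    """Stage 2 (hypothetical stand-in): a latent diffusion denoising step
    conditioned on LOA; AudioLDM 2 uses a UNet over mel-VAE latents."""
    def __init__(self, latent_dim=8, cond_dim=768):
        super().__init__()
        self.cond_proj = nn.Linear(cond_dim, latent_dim)
        self.net = nn.Sequential(nn.Linear(2 * latent_dim, 64), nn.SiLU(),
                                 nn.Linear(64, latent_dim))

    def forward(self, z_t, loa):              # z_t: (B, latent_dim) noisy latent
        cond = self.cond_proj(loa.mean(dim=1))           # pool LOA over time
        return self.net(torch.cat([z_t, cond], dim=-1))  # predicted noise

# Inference path: condition -> LOA (via the LM) -> LOA-conditioned denoising.
# At training time, LOA would instead come from AudioMAE on the target audio.
cond_feats = torch.randn(2, 10, 768)          # stand-in conditioning features
z_t = torch.randn(2, 8)                       # noisy audio latent at step t
loa = Text2LOA()(cond_feats)
eps_hat = LOADenoiser()(z_t, loa)
print(eps_hat.shape)                          # torch.Size([2, 8])

One consequence of this factorization, per the abstract, is reuse: the AudioMAE representation and the LOA-conditioned latent diffusion model are shared across speech, music, and sound effects, with only the condition-to-LOA stage depending on the input modality.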