
UniAudio: An Audio Foundation Model Toward Universal Audio Generation

October 1, 2023
Authors: Dongchao Yang, Jinchuan Tian, Xu Tan, Rongjie Huang, Songxiang Liu, Xuankai Chang, Jiatong Shi, Sheng Zhao, Jiang Bian, Xixin Wu, Zhou Zhao, Helen Meng
cs.AI

Abstract

Language models (LMs) have demonstrated the capability to handle a variety of generative tasks. This paper presents the UniAudio system, which, unlike prior task-specific approaches, leverages LM techniques to generate multiple types of audio (including speech, sounds, music, and singing) under given input conditions. UniAudio 1) first tokenizes all types of target audio along with other condition modalities, 2) concatenates each source-target pair into a single sequence, and 3) performs next-token prediction with an LM. In addition, a multi-scale Transformer model is proposed to handle the overly long sequences produced by the residual-vector-quantization-based neural codec used in tokenization. Training of UniAudio is scaled up to 165K hours of audio and 1B parameters across all generative tasks, aiming to acquire sufficient prior knowledge not only of the intrinsic properties of audio but also of the inter-relationships between audio and other modalities. The trained UniAudio model therefore has the potential to become a foundation model for universal audio generation: it shows strong capability in all trained tasks and can seamlessly support new audio generation tasks after simple fine-tuning. Experiments demonstrate that UniAudio achieves state-of-the-art or at least competitive results on most of the 11 tasks. Demo and code are released at https://github.com/yangdongchao/UniAudio
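
To make the three-step recipe in the abstract concrete, here is a minimal, self-contained sketch of the formulation: tokenize the condition and the target, concatenate them into one sequence, and train a causal LM with next-token prediction. This is not the authors' implementation; the vocabulary sizes, special tokens, model dimensions, and the random stand-in for codec output below are illustrative assumptions, and a real pipeline would tokenize audio with a neural codec and text with a phoneme or text tokenizer.

```python
# Minimal sketch of UniAudio's sequence formulation, NOT the authors' code:
# 1) tokenize conditions and target audio, 2) concatenate into one sequence,
# 3) train a causal LM with next-token prediction. All vocabulary sizes,
# special tokens, and model dimensions are illustrative assumptions.
import torch
import torch.nn as nn
import torch.nn.functional as F

TEXT_VOCAB, CODEC_VOCAB, N_Q = 10_000, 1024, 3   # hypothetical sizes
BOS, TASK_TTS, AUDIO_START = 0, 1, 2             # hypothetical special tokens
VOCAB = TEXT_VOCAB + CODEC_VOCAB * N_Q + 16      # one shared flat vocabulary

def flatten_codec_tokens(codes: torch.Tensor) -> torch.Tensor:
    """codes: (T_frames, N_Q) RVQ indices -> flat 1-D token sequence.
    Each codebook level gets its own ID range; flattening makes the
    sequence N_Q times longer than the codec frame rate, which is the
    length problem the multi-scale Transformer is designed to handle."""
    offsets = TEXT_VOCAB + torch.arange(N_Q) * CODEC_VOCAB
    return (codes + offsets).reshape(-1)

def build_sequence(text_ids: torch.Tensor, codes: torch.Tensor) -> torch.Tensor:
    """Concatenate [task tag, condition tokens, target audio tokens]."""
    prefix = torch.tensor([BOS, TASK_TTS])
    audio = torch.cat([torch.tensor([AUDIO_START]), flatten_codec_tokens(codes)])
    return torch.cat([prefix, text_ids, audio])

class TinyCausalLM(nn.Module):
    """Decoder-only LM (positional encodings omitted for brevity)."""
    def __init__(self, vocab=VOCAB, d=256, layers=2, heads=4):
        super().__init__()
        self.emb = nn.Embedding(vocab, d)
        block = nn.TransformerEncoderLayer(d, heads, 4 * d, batch_first=True)
        self.body = nn.TransformerEncoder(block, layers)
        self.head = nn.Linear(d, vocab)

    def forward(self, x):                         # x: (B, L) token IDs
        mask = nn.Transformer.generate_square_subsequent_mask(x.size(1))
        return self.head(self.body(self.emb(x), mask=mask))

# Toy usage: 5 "text" tokens conditioning 4 codec frames (4 * N_Q audio tokens).
text = torch.randint(3, TEXT_VOCAB, (5,))
codes = torch.randint(0, CODEC_VOCAB, (4, N_Q))   # stand-in for codec output
seq = build_sequence(text, codes).unsqueeze(0)    # (1, L)
logits = TinyCausalLM()(seq[:, :-1])              # predict token t+1 from t
loss = F.cross_entropy(logits.reshape(-1, VOCAB), seq[:, 1:].reshape(-1))
```

Note how flattening the residual-vector-quantized codes multiplies the sequence length by the number of codebooks N_Q; this is the blow-up the proposed multi-scale Transformer addresses, with a global module modeling the sequence at the frame level and a local module predicting the N_Q tokens within each frame.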