

UniAudio: An Audio Foundation Model Toward Universal Audio Generation

October 1, 2023
作者: Dongchao Yang, Jinchuan Tian, Xu Tan, Rongjie Huang, Songxiang Liu, Xuankai Chang, Jiatong Shi, Sheng Zhao, Jiang Bian, Xixin Wu, Zhou Zhao, Helen Meng
cs.AI

Abstract
Language models (LMs) have demonstrated the capability to handle a variety of generative tasks. This paper presents the UniAudio system, which, unlike prior task-specific approaches, leverages LM techniques to generate multiple types of audio (including speech, sounds, music, and singing) with given input conditions. UniAudio 1) first tokenizes all types of target audio along with other condition modalities, 2) concatenates each source-target pair as a single sequence, and 3) performs next-token prediction using LMs. In addition, a multi-scale Transformer model is proposed to handle the overly long sequences caused by the residual-vector-quantization-based neural codec used in tokenization. Training of UniAudio is scaled up to 165K hours of audio and 1B parameters across all generative tasks, aiming to obtain sufficient prior knowledge not only of the intrinsic properties of audio but also of the inter-relationships between audio and other modalities. Therefore, the trained UniAudio model has the potential to become a foundation model for universal audio generation: it shows strong capability on all trained tasks and can seamlessly support new audio generation tasks after simple fine-tuning. Experiments demonstrate that UniAudio achieves state-of-the-art or at least competitive results on most of the 11 tasks. Demo and code are released at https://github.com/yangdongchao/UniAudio.
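The sequence formulation in steps 1)-3) can be sketched as follows. This is a minimal illustration, not the paper's implementation: the tokenizer functions, vocabulary sizes, task ID, and separator token are all hypothetical placeholders, and the real system uses modality-specific tokenizers (e.g. an RVQ neural codec for audio, which multiplies sequence length by the number of quantizer levels, motivating the multi-scale Transformer).

```python
# Hypothetical sketch of UniAudio's training-sequence construction:
# tokenize condition and target, concatenate into one sequence,
# then train an LM with next-token prediction on that sequence.

def tokenize_condition(condition: str) -> list[int]:
    """Placeholder condition tokenizer (text, phonemes, ...):
    hash each symbol into a small illustrative vocabulary."""
    return [hash(c) % 1000 for c in condition]

def tokenize_audio(frames: list[str], num_quantizers: int = 3) -> list[int]:
    """Mimic residual vector quantization: each audio frame yields one
    token per quantizer level, so the sequence grows by that factor."""
    return [hash((f, q)) % 1000 for f in frames for q in range(num_quantizers)]

def build_training_sequence(condition: str, target_frames: list[str],
                            task_id: int = 0) -> list[int]:
    """Concatenate [task, condition, separator, target audio tokens]
    into a single sequence for next-token prediction."""
    SEP = 1001  # hypothetical separator token outside the content vocabulary
    return ([task_id]
            + tokenize_condition(condition)
            + [SEP]
            + tokenize_audio(target_frames))

seq = build_training_sequence("hello", ["frame0", "frame1"])
```

With 3 quantizer levels, even 2 audio frames contribute 6 tokens, illustrating why codec tokenization produces overly long sequences for a flat Transformer.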