UniAudio: An Audio Foundation Model Toward Universal Audio Generation
October 1, 2023
Authors: Dongchao Yang, Jinchuan Tian, Xu Tan, Rongjie Huang, Songxiang Liu, Xuankai Chang, Jiatong Shi, Sheng Zhao, Jiang Bian, Xixin Wu, Zhou Zhao, Helen Meng
cs.AI
Abstract
Language models (LMs) have demonstrated the capability to handle a variety of
generative tasks. This paper presents the UniAudio system, which, unlike prior
task-specific approaches, leverages LM techniques to generate multiple types
of audio (including speech, sounds, music, and singing) under given input
conditions. UniAudio 1) first tokenizes all types of target audio along with
the other condition modalities, 2) concatenates each source-target pair into a
single sequence, and 3) performs next-token prediction using LMs. In addition,
a multi-scale Transformer model is proposed to handle the overly long sequences
caused by the residual-vector-quantization-based neural codec used in
tokenization. Training of UniAudio is scaled up to 165K hours of audio and 1B
parameters across all generative tasks, aiming to obtain sufficient prior
knowledge not only of the intrinsic properties of audio but also of the
inter-relationships between audio and other modalities. As a result, the
trained UniAudio model has the potential to become a foundation model for
universal audio generation: it shows strong capability on all trained tasks
and can seamlessly support new audio generation tasks after simple
fine-tuning. Experiments demonstrate that UniAudio achieves state-of-the-art
or at least competitive results on most of the 11 tasks. Demo and code are
released at https://github.com/yangdongchao/UniAudio.
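The three-stage pipeline in the abstract can be illustrated with a minimal sketch. This is not UniAudio's actual implementation; the token values, separator convention, and helper names below are hypothetical, and real tokenization would use a neural codec and text/phoneme tokenizers.

```python
# Illustrative sketch of UniAudio's sequence layout (hypothetical tokens):
# condition tokens and target-audio tokens are concatenated into one
# sequence, and a language model is trained via next-token prediction.

def build_sequence(condition_tokens, target_tokens, sep_token=0):
    # Stages 1-2: tokenized condition + separator + tokenized target audio
    return condition_tokens + [sep_token] + target_tokens

def next_token_pairs(sequence):
    # Stage 3: next-token prediction -- each prefix predicts the next token
    return [(sequence[:i], sequence[i]) for i in range(1, len(sequence))]

seq = build_sequence([101, 102], [201, 202, 203])
print(seq)                    # [101, 102, 0, 201, 202, 203]
print(len(next_token_pairs(seq)))  # 5
```

Because all tasks share this source-target sequence format, one LM can be trained jointly on every task and fine-tuned on new ones without architectural changes.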