Qwen-Audio: Advancing Universal Audio Understanding via Unified Large-Scale Audio-Language Models
November 14, 2023
Authors: Yunfei Chu, Jin Xu, Xiaohuan Zhou, Qian Yang, Shiliang Zhang, Zhijie Yan, Chang Zhou, Jingren Zhou
cs.AI
Abstract
Recently, instruction-following audio-language models have received broad attention for audio interaction with humans. However, the absence of pre-trained audio models capable of handling diverse audio types and tasks has hindered progress in this field. Consequently, most existing works have only been able to support a limited range of interaction capabilities. In this paper, we develop the Qwen-Audio model and address this limitation by scaling up audio-language pre-training to cover over 30 tasks and various audio types, such as human speech, natural sounds, music, and songs, to facilitate universal audio understanding abilities. However, directly co-training all tasks and datasets can lead to interference issues, as the textual labels associated with different datasets vary considerably in task focus, language, annotation granularity, and text structure. To overcome this one-to-many interference, we carefully design a multi-task training framework that conditions the decoder on a sequence of hierarchical tags: shared tags encourage knowledge sharing across tasks, while task-specific tags prevent interference. Remarkably, Qwen-Audio achieves impressive performance across diverse benchmark tasks without requiring any task-specific fine-tuning, surpassing its counterparts. Building upon the capabilities of Qwen-Audio, we further develop Qwen-Audio-Chat, which accepts diverse audio and text inputs, enabling multi-turn dialogues and supporting various audio-centric scenarios.
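The hierarchical-tag conditioning described in the abstract can be illustrated with a minimal sketch. The tag names and function below are assumptions for exposition (in the style of Whisper-like multitask token prefixes), not Qwen-Audio's actual vocabulary or API: the point is only that tasks sharing a tag prefix can share knowledge, while task-specific tags keep dissimilar label formats from interfering.

```python
# Hedged sketch of hierarchical tag conditioning for multi-task decoding.
# All tag strings and the function name are illustrative assumptions,
# not the actual Qwen-Audio token vocabulary.

def build_tag_prefix(audio_lang, task, text_lang=None, timestamps=False):
    """Compose a hierarchical tag sequence to condition the decoder on.

    Tags go from general to specific: shared leading tags (e.g. the same
    audio-language tag across datasets) encourage knowledge sharing, while
    the task tag separates outputs with different label formats.
    """
    tags = ["<|startoftranscription|>", f"<|{audio_lang}|>", f"<|{task}|>"]
    if text_lang:
        tags.append(f"<|{text_lang}|>")
    tags.append("<|timestamps|>" if timestamps else "<|notimestamps|>")
    return tags

# English speech recognition and audio captioning share the leading
# structure but diverge at the task tag, so their labels do not collide.
asr_prefix = build_tag_prefix("en", "transcribe", text_lang="en",
                              timestamps=True)
caption_prefix = build_tag_prefix("unknown", "caption", text_lang="en")
```

In a real model these tags would be special tokens prepended to the decoder input, and the loss would be computed only on the tokens following the prefix.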