WavLLM: Towards Robust and Adaptive Speech Large Language Model
March 31, 2024
Authors: Shujie Hu, Long Zhou, Shujie Liu, Sanyuan Chen, Hongkun Hao, Jing Pan, Xunying Liu, Jinyu Li, Sunit Sivasankaran, Linquan Liu, Furu Wei
cs.AI
Abstract
The recent advancements in large language models (LLMs) have revolutionized
the field of natural language processing, progressively broadening their scope
to multimodal perception and generation. However, effectively integrating
listening capabilities into LLMs poses significant challenges, particularly
with respect to generalizing across varied contexts and executing complex
auditory tasks. In this work, we introduce WavLLM, a robust and adaptive speech
large language model with dual encoders and a prompt-aware LoRA weight adapter,
optimized by a two-stage curriculum learning approach. Leveraging dual
encoders, we decouple different types of speech information, utilizing a
Whisper encoder to process the semantic content of speech, and a WavLM encoder
to capture the unique characteristics of the speaker's identity. Within the
curriculum learning framework, WavLLM first builds its foundational
capabilities by optimizing on mixed elementary single tasks, followed by
advanced multi-task training on more complex tasks such as combinations of the
elementary tasks. To enhance flexibility and adherence to different tasks and
instructions, a prompt-aware LoRA weight adapter is introduced in the second,
advanced multi-task training stage. We validate the proposed model on universal
speech benchmarks including ASR, ST, SV, and ER tasks, and also apply it to
specialized datasets such as the Gaokao English listening comprehension set for
SQA and a speech Chain-of-Thought (CoT) evaluation set. Experiments
demonstrate that the proposed model achieves state-of-the-art performance
across a range of speech tasks at the same model size, exhibiting robust
generalization capabilities in executing complex tasks with a CoT approach.
Furthermore, our model successfully completes Gaokao tasks without specialized
training. The code, models, audio, and Gaokao evaluation set can be accessed
at aka.ms/wavllm.
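Below is a minimal, illustrative PyTorch sketch of the two ideas the abstract highlights: fusing the outputs of a semantic (Whisper-style) encoder and a speaker (WavLM-style) encoder, and scaling a LoRA update by a prompt-dependent gate. All class names, dimensions, and the scalar gating scheme are assumptions made for illustration; the paper's actual prompt-aware adapter may differ in form.

```python
import torch
import torch.nn as nn


class DualEncoderFrontend(nn.Module):
    """Fuse semantic (Whisper-style) and speaker (WavLM-style) features and
    project them to the LLM embedding width. Dimensions are placeholders."""

    def __init__(self, d_semantic: int = 1024, d_speaker: int = 768,
                 d_llm: int = 4096):
        super().__init__()
        self.proj = nn.Linear(d_semantic + d_speaker, d_llm)

    def forward(self, semantic: torch.Tensor, speaker: torch.Tensor) -> torch.Tensor:
        # Both inputs: (batch, frames, dim); concatenate on the feature axis.
        return self.proj(torch.cat([semantic, speaker], dim=-1))


class PromptAwareLoRALinear(nn.Module):
    """A frozen linear layer plus a low-rank (LoRA) update whose strength is
    modulated by the prompt. Here the modulation is a single sigmoid gate,
    a deliberate simplification of the paper's prompt-aware weight adapter."""

    def __init__(self, d_in: int, d_out: int, rank: int = 8):
        super().__init__()
        self.base = nn.Linear(d_in, d_out)
        self.base.weight.requires_grad_(False)  # pretrained weight stays frozen
        self.base.bias.requires_grad_(False)
        self.lora_down = nn.Linear(d_in, rank, bias=False)
        self.lora_up = nn.Linear(rank, d_out, bias=False)
        self.gate = nn.Linear(d_in, 1)  # hypothetical prompt-conditioned gate

    def forward(self, x: torch.Tensor, prompt_vec: torch.Tensor) -> torch.Tensor:
        # x: (batch, seq, d_in); prompt_vec: (batch, d_in), e.g. mean-pooled
        # prompt embeddings. The gate scales the LoRA branch per example.
        alpha = torch.sigmoid(self.gate(prompt_vec)).unsqueeze(1)  # (batch, 1, 1)
        return self.base(x) + alpha * self.lora_up(self.lora_down(x))
```

A quick smoke test under the same assumed shapes:

```python
frontend = DualEncoderFrontend()
layer = PromptAwareLoRALinear(4096, 4096)
semantic = torch.randn(2, 100, 1024)   # stand-in for Whisper encoder output
speaker = torch.randn(2, 100, 768)     # stand-in for WavLM encoder output
speech_embeds = frontend(semantic, speaker)             # (2, 100, 4096)
out = layer(speech_embeds, speech_embeds.mean(dim=1))   # (2, 100, 4096)
```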