LLaMA-Omni: Seamless Speech Interaction with Large Language Models
September 10, 2024
Authors: Qingkai Fang, Shoutao Guo, Yan Zhou, Zhengrui Ma, Shaolei Zhang, Yang Feng
cs.AI
Abstract
Models like GPT-4o enable real-time interaction with large language models
(LLMs) through speech, significantly enhancing user experience compared to
traditional text-based interaction. However, there is still a lack of
exploration on how to build speech interaction models based on open-source
LLMs. To address this, we propose LLaMA-Omni, a novel model architecture
designed for low-latency and high-quality speech interaction with LLMs.
LLaMA-Omni integrates a pretrained speech encoder, a speech adaptor, an LLM,
and a streaming speech decoder. It eliminates the need for speech
transcription, and can simultaneously generate text and speech responses
directly from speech instructions with extremely low latency. We build our
model based on the latest Llama-3.1-8B-Instruct model. To align the model with
speech interaction scenarios, we construct a dataset named InstructS2S-200K,
which includes 200K speech instructions and corresponding speech responses.
Experimental results show that compared to previous speech-language models,
LLaMA-Omni provides better responses in both content and style, with a response
latency as low as 226ms. Additionally, training LLaMA-Omni takes less than 3
days on just 4 GPUs, paving the way for the efficient development of
speech-language models in the future.
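To make the pipeline described in the abstract concrete, below is a minimal, self-contained sketch of the four-component architecture (speech encoder → speech adaptor → LLM → streaming speech decoder) in PyTorch. All module names, dimensions, and hyperparameters are illustrative assumptions for exposition only, not the authors' released implementation; in practice the encoder and LLM would be pretrained models (e.g., a Whisper-style encoder and Llama-3.1-8B-Instruct) rather than the toy stand-ins used here.

```python
# Illustrative sketch of a LLaMA-Omni-style pipeline as described in the abstract:
# speech encoder -> speech adaptor -> LLM -> streaming speech decoder.
# Shapes, strides, and unit counts are assumptions, not the paper's exact values.
import torch
import torch.nn as nn


class SpeechAdaptor(nn.Module):
    """Downsamples encoder features and projects them into the LLM embedding space."""

    def __init__(self, enc_dim=1280, llm_dim=4096, stride=5):
        super().__init__()
        self.stride = stride
        self.proj = nn.Sequential(
            nn.Linear(enc_dim * stride, llm_dim),
            nn.GELU(),
            nn.Linear(llm_dim, llm_dim),
        )

    def forward(self, feats):                      # (B, T, enc_dim)
        B, T, D = feats.shape
        T = T - T % self.stride                    # trim so frames group evenly
        feats = feats[:, :T].reshape(B, T // self.stride, D * self.stride)
        return self.proj(feats)                    # (B, T', llm_dim)


class StreamingSpeechDecoder(nn.Module):
    """Maps LLM hidden states to discrete speech units that a vocoder can consume."""

    def __init__(self, llm_dim=4096, n_units=1000, n_layers=2):
        super().__init__()
        layer = nn.TransformerEncoderLayer(llm_dim, nhead=8, batch_first=True)
        self.blocks = nn.TransformerEncoder(layer, num_layers=n_layers)
        self.unit_head = nn.Linear(llm_dim, n_units)

    def forward(self, hidden):                     # (B, T', llm_dim)
        return self.unit_head(self.blocks(hidden)).argmax(-1)   # (B, T') unit ids


if __name__ == "__main__":
    # Toy stand-ins for the pretrained components; real weights would be loaded here.
    speech_encoder = nn.Linear(80, 1280)           # log-mel frames -> encoder features
    llm_backbone = nn.Linear(4096, 4096)           # placeholder for the LLM forward pass
    adaptor = SpeechAdaptor()
    speech_decoder = StreamingSpeechDecoder()

    mel = torch.randn(1, 300, 80)                  # ~3 s of 80-dim log-mel features
    enc = speech_encoder(mel)                      # (1, 300, 1280)
    llm_in = adaptor(enc)                          # (1, 60, 4096) speech "tokens" for the LLM
    hidden = llm_backbone(llm_in)                  # LLM hidden states (text is decoded from these)
    units = speech_decoder(hidden)                 # discrete units, streamed to a vocoder
    print(units.shape)                             # torch.Size([1, 60])
```

The key ideas this sketch tries to reflect are that the adaptor lets speech enter the LLM directly as embedding-space inputs, removing the need for a transcription step, and that a lightweight decoding head over the LLM's hidden states can emit speech units alongside the text response, which is what enables the low-latency, simultaneous text-and-speech output the abstract reports.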