VITA-Audio: Fast Interleaved Cross-Modal Token Generation for Efficient Large Speech-Language Model
May 6, 2025
Authors: Zuwei Long, Yunhang Shen, Chaoyou Fu, Heting Gao, Lijiang Li, Peixian Chen, Mengdan Zhang, Hang Shao, Jian Li, Jinlong Peng, Haoyu Cao, Ke Li, Rongrong Ji, Xing Sun
cs.AI
Abstract
With the growing requirement for natural human-computer interaction,
speech-based systems have received increasing attention, as speech is one of the
most common forms of daily communication. However, the existing speech models still
experience high latency when generating the first audio token during streaming,
which poses a significant bottleneck for deployment. To address this issue, we
propose VITA-Audio, an end-to-end large speech model with fast audio-text token
generation. Specifically, we introduce a lightweight Multiple Cross-modal Token
Prediction (MCTP) module that efficiently generates multiple audio tokens
within a single model forward pass, which not only accelerates the inference
but also significantly reduces the latency for generating the first audio in
streaming scenarios. In addition, a four-stage progressive training strategy is
explored to achieve model acceleration with minimal loss of speech quality. To
our knowledge, VITA-Audio is the first multi-modal large language model capable
of generating audio output during the first forward pass, enabling real-time
conversational capabilities with minimal latency. VITA-Audio is fully
reproducible and is trained on open-source data only. Experimental results
demonstrate that our model not only achieves an inference speedup of 3-5x at the
7B parameter scale, but also significantly outperforms open-source models of
similar size on multiple benchmarks for automatic speech recognition
(ASR), text-to-speech (TTS), and spoken question answering (SQA) tasks.
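The core idea behind the MCTP module described above can be sketched in a few lines: several lightweight prediction heads share the same backbone hidden state, so one forward pass of the large model yields multiple audio tokens instead of one. This is only an illustrative sketch under assumed shapes (the function `mctp_heads`, the head count, and the vocabulary size are hypothetical, not taken from the paper):

```python
import numpy as np

def mctp_heads(hidden, weights):
    """Illustrative sketch of multi-token prediction in the spirit of MCTP:
    each lightweight head maps the same backbone hidden state to one audio
    token, so K audio tokens are produced per backbone forward pass.
    All shapes and names here are assumptions for illustration."""
    # hidden: (batch, hidden_dim) last hidden state from one LLM forward pass
    # weights: list of K (hidden_dim, audio_vocab) head matrices
    logits = np.stack([hidden @ w for w in weights], axis=1)  # (batch, K, vocab)
    return logits.argmax(axis=-1)  # (batch, K) greedy audio token ids

rng = np.random.default_rng(0)
hidden = rng.normal(size=(2, 32))                          # toy hidden states
weights = [rng.normal(size=(32, 256)) for _ in range(4)]   # 4 lightweight heads
tokens = mctp_heads(hidden, weights)
print(tokens.shape)  # (2, 4): four audio tokens per sequence, one backbone pass
```

Because the heads are small relative to the backbone, emitting K tokens per pass amortizes the expensive forward computation, which is what reduces both overall inference time and first-audio latency in streaming.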