S2S-Arena, Evaluating Speech2Speech Protocols on Instruction Following with Paralinguistic Information
March 7, 2025
Authors: Feng Jiang, Zhiyu Lin, Fan Bu, Yuhao Du, Benyou Wang, Haizhou Li
cs.AI
Abstract
The rapid development of large language models (LLMs) has brought significant attention to speech models, particularly recent progress in speech2speech protocols supporting speech input and output. However, existing benchmarks adopt automatic text-based evaluators to assess the instruction-following ability of these models and lack consideration of paralinguistic information in both speech understanding and generation. To address these issues, we introduce S2S-Arena, a novel arena-style S2S benchmark that evaluates instruction-following capabilities with paralinguistic information in both speech-in and speech-out across real-world tasks. We design 154 samples that fuse TTS and live recordings, covering 21 tasks across four domains, and manually evaluate existing popular speech models in an arena-style manner. The experimental results show that: (1) in addition to the superior performance of GPT-4o, speech models that cascade ASR, LLM, and TTS outperform jointly trained models after text-speech alignment in speech2speech protocols; (2) considering paralinguistic information, the knowledgeability of a speech model mainly depends on its LLM backbone, while its multilingual support is limited by its speech module; (3) excellent speech models can already understand paralinguistic information in speech input, but generating audio with appropriate paralinguistic information remains a challenge.
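Arena-style evaluation, as referenced in the abstract, typically aggregates pairwise human preferences between model responses into a per-model ranking. The following is a minimal sketch of how such pairwise judgments could be turned into scores with standard Elo updates; the paper's abstract does not specify its exact scoring procedure, and the K-factor, base rating, model names, and judgments below are hypothetical.

```python
# Minimal sketch: aggregating arena-style pairwise preferences with Elo updates.
# Illustration only; not the paper's exact procedure.
from collections import defaultdict

K = 32          # Elo update step size (assumed)
BASE = 1000.0   # starting rating for every model (assumed)

def expected(r_a: float, r_b: float) -> float:
    """Expected win probability of A against B under the Elo model."""
    return 1.0 / (1.0 + 10 ** ((r_b - r_a) / 400.0))

def update_elo(ratings: dict, winner: str, loser: str) -> None:
    """Apply one pairwise human judgment: `winner` was preferred over `loser`."""
    e_w = expected(ratings[winner], ratings[loser])
    ratings[winner] += K * (1.0 - e_w)
    ratings[loser] -= K * (1.0 - e_w)

# Hypothetical pairwise judgments: (preferred model, other model)
judgments = [
    ("model_a", "model_b"),
    ("model_a", "model_c"),
    ("model_c", "model_b"),
]

ratings = defaultdict(lambda: BASE)
for winner, loser in judgments:
    update_elo(ratings, winner, loser)

for name, score in sorted(ratings.items(), key=lambda kv: -kv[1]):
    print(f"{name}: {score:.1f}")
```

Win-rate tables over all model pairs are an equally common aggregation choice; Elo is shown here only because it is the convention popularized by earlier arena-style LLM benchmarks.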