Do What I Say: A Spoken Prompt Dataset for Instruction-Following
March 10, 2026
Authors: Maike Züfle, Sara Papi, Fabian Retkowski, Szymon Mazurek, Marek Kasztelnik, Alexander Waibel, Luisa Bentivogli, Jan Niehues
cs.AI
Abstract
Speech Large Language Models (SLLMs) have rapidly expanded, supporting a wide range of tasks. These models are typically evaluated using text prompts, which may not reflect real-world scenarios where users interact through speech. To address this gap, we introduce DoWhatISay (DOWIS), a multilingual dataset of human-recorded spoken and written prompts designed to pair with any existing benchmark for realistic evaluation of SLLMs under spoken instruction conditions. Spanning 9 tasks and 11 languages, it provides 10 prompt variants per task-language pair across five styles. Using DOWIS, we benchmark state-of-the-art SLLMs, analyzing the interplay between prompt modality, style, language, and task type. Results show that text prompts consistently outperform spoken prompts, particularly in low-resource and cross-lingual settings. Only for tasks with speech output do spoken prompts close the gap, highlighting the need for speech-based prompting in SLLM evaluation.