
Do What I Say: A Spoken Prompt Dataset for Instruction-Following

March 10, 2026
作者: Maike Züfle, Sara Papi, Fabian Retkowski, Szymon Mazurek, Marek Kasztelnik, Alexander Waibel, Luisa Bentivogli, Jan Niehues
cs.AI

Abstract

Speech Large Language Models (SLLMs) have rapidly expanded, supporting a wide range of tasks. These models are typically evaluated using text prompts, which may not reflect real-world scenarios where users interact via speech. To address this gap, we introduce DoWhatISay (DOWIS), a multilingual dataset of human-recorded spoken and written prompts designed to pair with any existing benchmark for realistic evaluation of SLLMs under spoken-instruction conditions. Spanning 9 tasks and 11 languages, it provides 10 prompt variants per task-language pair, across five styles. Using DOWIS, we benchmark state-of-the-art SLLMs, analyzing the interplay between prompt modality, style, language, and task type. Results show that text prompts consistently outperform spoken prompts, particularly in low-resource and cross-lingual settings. Only for tasks with speech output do spoken prompts close the gap, highlighting the need for speech-based prompting in SLLM evaluation.