

AHELM: A Holistic Evaluation of Audio-Language Models

August 29, 2025
Authors: Tony Lee, Haoqin Tu, Chi Heem Wong, Zijun Wang, Siwei Yang, Yifan Mai, Yuyin Zhou, Cihang Xie, Percy Liang
cs.AI

Abstract

Evaluations of audio-language models (ALMs) -- multimodal models that take interleaved audio and text as input and output text -- are hindered by the lack of standardized benchmarks; most benchmarks measure only one or two capabilities and omit evaluative aspects such as fairness or safety. Furthermore, comparison across models is difficult as separate evaluations test a limited number of models and use different prompting methods and inference parameters. To address these shortfalls, we introduce AHELM, a benchmark that aggregates various datasets -- including 2 new synthetic audio-text datasets called PARADE, which evaluates the ALMs on avoiding stereotypes, and CoRe-Bench, which measures reasoning over conversational audio through inferential multi-turn question answering -- to holistically measure the performance of ALMs across 10 aspects we have identified as important to the development and usage of ALMs: audio perception, knowledge, reasoning, emotion detection, bias, fairness, multilinguality, robustness, toxicity, and safety. We also standardize the prompts, inference parameters, and evaluation metrics to ensure equitable comparisons across models. We test 14 open-weight and closed-API ALMs from 3 developers and 3 additional simple baseline systems each consisting of an automatic speech recognizer and a language model. Our results show that while Gemini 2.5 Pro ranks top in 5 out of 10 aspects, it exhibits group unfairness (p=0.01) on ASR tasks whereas most of the other models do not. We also find that the baseline systems perform reasonably well on AHELM, with one ranking 5th overall despite having only speech-to-text capabilities. For transparency, all raw prompts, model generations, and outputs are available on our website at https://crfm.stanford.edu/helm/audio/v1.0.0. AHELM is intended to be a living benchmark and new datasets and models will be added over time.
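To make the baseline systems concrete: the paper's baselines cascade an automatic speech recognizer into a text-only language model. The snippet below is a minimal sketch of such a cascade, assuming the open-source `openai-whisper` and Hugging Face `transformers` packages; the specific models, prompt template, and decoding settings here are illustrative assumptions, not the configuration used in AHELM.

```python
# Minimal sketch of a cascaded ASR + LM baseline (illustrative only).
# Assumes `openai-whisper` and `transformers` are installed; the model
# choices and prompt format are assumptions, not AHELM's actual setup.
import whisper
from transformers import pipeline

asr_model = whisper.load_model("base")          # speech-to-text front end
lm = pipeline("text-generation", model="gpt2")  # placeholder text-only LM

def answer_audio_question(audio_path: str, question: str) -> str:
    """Transcribe the audio, then ask a text-only LM to answer the question."""
    transcript = asr_model.transcribe(audio_path)["text"]
    prompt = f"Transcript: {transcript.strip()}\nQuestion: {question}\nAnswer:"
    output = lm(prompt, max_new_tokens=64, do_sample=False)[0]["generated_text"]
    return output[len(prompt):].strip()  # drop the echoed prompt

print(answer_audio_question("clip.wav", "What is the speaker's main point?"))
```

Because the language model never sees the raw audio, a cascade like this can only handle information recoverable from the transcript, which is consistent with the abstract's observation that a speech-to-text-only baseline still ranks fifth overall.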