
MMAU-Pro: A Challenging and Comprehensive Benchmark for Holistic Evaluation of Audio General Intelligence

August 19, 2025
作者: Sonal Kumar, Šimon Sedláček, Vaibhavi Lokegaonkar, Fernando López, Wenyi Yu, Nishit Anand, Hyeonggon Ryu, Lichang Chen, Maxim Plička, Miroslav Hlaváček, William Fineas Ellingwood, Sathvik Udupa, Siyuan Hou, Allison Ferner, Sara Barahona, Cecilia Bolaños, Satish Rahi, Laura Herrera-Alarcón, Satvik Dixit, Siddhi Patil, Soham Deshmukh, Lasha Koroshinadze, Yao Liu, Leibny Paola Garcia Perera, Eleni Zanou, Themos Stafylakis, Joon Son Chung, David Harwath, Chao Zhang, Dinesh Manocha, Alicia Lozano-Diez, Santosh Kesiraju, Sreyan Ghosh, Ramani Duraiswami
cs.AI

Abstract

Audio comprehension, including speech, non-speech sounds, and music, is essential for achieving human-level intelligence. Consequently, AI agents must demonstrate holistic audio understanding to qualify as generally intelligent. However, evaluating auditory intelligence comprehensively remains challenging. To address this gap, we introduce MMAU-Pro, the most comprehensive and rigorously curated benchmark for assessing audio intelligence in AI systems. MMAU-Pro contains 5,305 instances, each pairing one or more audio clips with human-expert-generated question-answer pairs spanning speech, sound, music, and their combinations. Unlike existing benchmarks, MMAU-Pro evaluates auditory intelligence across 49 unique skills and multiple complex dimensions, including long-form audio comprehension, spatial audio reasoning, and multi-audio understanding. All questions are meticulously designed to require deliberate multi-hop reasoning, in both multiple-choice and open-ended response formats. Importantly, the audio data is sourced directly "from the wild" rather than from existing datasets with known distributions. We evaluate 22 leading open-source and proprietary multimodal AI models, revealing significant limitations: even state-of-the-art models such as Gemini 2.5 Flash and Audio Flamingo 3 achieve only 59.2% and 51.7% accuracy, respectively, approaching random performance in multiple categories. Our extensive analysis highlights specific shortcomings and offers novel, actionable insights to help the community advance future AI systems toward audio general intelligence. The benchmark and code are available at https://sonalkum.github.io/mmau-pro.
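To make the evaluation setup concrete, the sketch below shows how accuracy over multiple-choice instances of this kind might be computed. This is a minimal illustration, not the benchmark's actual code: the field names (`audio_paths`, `question`, `choices`, `answer`) and the dummy predictor are assumptions for demonstration only.

```python
# Hypothetical sketch of multiple-choice accuracy scoring for
# MMAU-Pro-style instances. Field names are illustrative, not the
# benchmark's real schema.

def score(instances, predict):
    """Return accuracy of `predict` over multiple-choice `instances`.

    Each instance is a dict with one or more audio paths, a question,
    candidate choices, and a ground-truth answer string. `predict`
    maps an instance to the model's chosen answer string.
    """
    if not instances:
        return 0.0
    correct = sum(
        1
        for inst in instances
        if predict(inst).strip().lower() == inst["answer"].strip().lower()
    )
    return correct / len(instances)

# Toy usage: two instances, a dummy predictor that always answers "piano".
toy = [
    {"audio_paths": ["a.wav"], "question": "Which instrument is playing?",
     "choices": ["piano", "violin"], "answer": "piano"},
    {"audio_paths": ["b.wav", "c.wav"], "question": "Which clip is louder?",
     "choices": ["first", "second"], "answer": "second"},
]
acc = score(toy, lambda inst: "piano")
print(acc)  # 0.5 (first instance correct, second wrong)
```

Open-ended responses would need a separate judging step (e.g., human or model-based grading) rather than exact string matching, which is why the sketch covers only the multiple-choice format.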