A2Eval: Agentic and Automated Evaluation for Embodied Brain
February 2, 2026
Authors: Shuai Zhang, Jiayu Hu, Zijie Chen, Zeyuan Ding, Yi Zhang, Yingji Zhang, Ziyi Zhou, Junwei Liao, Shengjie Zhou, Yong Dai, Zhenzhong Lan, Xiaozhu Ju
cs.AI
Abstract
Current embodied VLM evaluation relies on static, expert-defined, manually annotated benchmarks that exhibit severe redundancy and coverage imbalance. This labor-intensive paradigm drains computational and annotation resources, inflates costs, and distorts model rankings, ultimately stifling iterative development. To address this, we propose Agentic Automatic Evaluation (A2Eval), the first agentic framework that automates benchmark curation and evaluation through two collaborative agents. The Data Agent autonomously induces capability dimensions and assembles a balanced, compact evaluation suite, while the Eval Agent synthesizes and validates executable evaluation pipelines, enabling fully autonomous, high-fidelity assessment. Evaluated across 10 benchmarks and 13 models, A2Eval compresses evaluation suites by 85%, reduces overall computational costs by 77%, and delivers a 4.6x speedup while preserving evaluation quality. Crucially, A2Eval corrects systematic ranking biases, improves human alignment to Spearman's rho = 0.85, and maintains high ranking fidelity (Kendall's tau = 0.81), establishing a new standard for high-fidelity, low-cost embodied assessment. Our code and data will be made publicly available.
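The ranking-fidelity metrics quoted above (Spearman's rho and Kendall's tau) compare how a compressed evaluation suite orders models relative to a reference ordering. The following is a minimal pure-Python sketch of both metrics under the no-ties case; the two score lists are illustrative placeholders, not A2Eval results.

```python
def ranks(scores):
    # Rank 1 = highest score; assumes no ties for simplicity.
    order = sorted(range(len(scores)), key=lambda i: -scores[i])
    r = [0] * len(scores)
    for rank, i in enumerate(order, start=1):
        r[i] = rank
    return r

def spearman_rho(x, y):
    # rho = 1 - 6 * sum(d_i^2) / (n * (n^2 - 1)), d_i = rank difference.
    rx, ry = ranks(x), ranks(y)
    n = len(x)
    d2 = sum((a - b) ** 2 for a, b in zip(rx, ry))
    return 1 - 6 * d2 / (n * (n * n - 1))

def kendall_tau(x, y):
    # tau-a = (concordant - discordant) / (n choose 2), no ties assumed.
    n = len(x)
    s = 0
    for i in range(n):
        for j in range(i + 1, n):
            s += 1 if (x[i] - x[j]) * (y[i] - y[j]) > 0 else -1
    return s / (n * (n - 1) / 2)

# Hypothetical model scores on a full benchmark vs. a compressed suite
# (one adjacent pair of models swaps rank under compression).
full = [0.82, 0.75, 0.71, 0.66, 0.60, 0.58, 0.49]
compressed = [0.80, 0.72, 0.74, 0.63, 0.61, 0.55, 0.50]
print(spearman_rho(full, compressed))  # high rho: rankings nearly agree
print(kendall_tau(full, compressed))   # one discordant pair out of 21
```

A compressed suite preserves evaluation quality when these correlations against the full suite (or against human preference rankings) stay high, which is what the rho = 0.85 and tau = 0.81 figures assert.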