

AJ-Bench: Benchmarking Agent-as-a-Judge for Environment-Aware Evaluation

April 20, 2026
Authors: Wentao Shi, Yu Wang, Yuyang Zhao, Yuxin Chen, Fuli Feng, Xueyuan Hao, Xi Su, Qi Gu, Hui Su, Xunliang Cai, Xiangnan He
cs.AI

Abstract

As reinforcement learning continues to scale the training of agents based on large language models, reliably verifying agent behavior in complex environments has become increasingly challenging. Existing approaches rely on rule-based verifiers or LLM-as-a-Judge models, which struggle to generalize beyond narrow domains. Agent-as-a-Judge addresses this limitation by actively interacting with environments and tools to acquire verifiable evidence, yet its capabilities remain underexplored. We introduce AJ-Bench, a benchmark that systematically evaluates Agent-as-a-Judge across three domains (search, data systems, and graphical user interfaces) and comprises 155 tasks and 516 annotated trajectories. The benchmark assesses judge agents on three abilities: information acquisition, state verification, and process verification. Experiments show that Agent-as-a-Judge achieves consistent gains over LLM-as-a-Judge baselines, while also revealing substantial open challenges in agent-based verification. Our data and code are available at https://aj-bench.github.io/.
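To make the Agent-as-a-Judge idea concrete, here is a minimal sketch of such a verification loop, assuming a hypothetical tool interface. All names below (Step, judge_trajectory, search, inspect_state, the llm callable) are illustrative assumptions, not the paper's actual implementation; the sketch only mirrors the three abilities the abstract says AJ-Bench measures: information acquisition via tool calls, state verification against the final environment state, and process verification over the recorded trajectory.

```python
# Minimal sketch of an Agent-as-a-Judge verification loop (illustrative only;
# all names below are hypothetical, not AJ-Bench's actual interface).
from dataclasses import dataclass
from typing import Callable

@dataclass
class Step:
    """One step of the trajectory under evaluation."""
    action: str
    observation: str

def judge_trajectory(
    task: str,
    trajectory: list[Step],
    search: Callable[[str], str],      # information acquisition tool
    inspect_state: Callable[[], str],  # state verification tool
    llm: Callable[[str], str],         # reasons over the gathered evidence
) -> str:
    """Judge a trajectory by actively gathering evidence rather than
    reading the transcript alone."""
    # 1. Information acquisition: query an external tool for ground truth.
    evidence = [f"[search] {search(task)}"]

    # 2. State verification: inspect the environment's post-execution state.
    evidence.append(f"[state] {inspect_state()}")

    # 3. Process verification: audit each recorded step against the evidence.
    transcript = "\n".join(
        f"{i}. {s.action} -> {s.observation}"
        for i, s in enumerate(trajectory, 1)
    )
    prompt = (
        f"Task: {task}\n"
        "Evidence:\n" + "\n".join(evidence) + "\n"
        f"Trajectory:\n{transcript}\n"
        "Did the agent complete the task correctly? "
        "Answer 'success' or 'failure' with a one-line justification."
    )
    return llm(prompt)
```

The key design point this sketch illustrates is that the verdict is conditioned on evidence the judge gathered itself (tool output, environment state), rather than on the evaluated agent's self-reported transcript alone.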