
AJ-Bench: Benchmarking Agent-as-a-Judge for Environment-Aware Evaluation

April 20, 2026
Authors: Wentao Shi, Yu Wang, Yuyang Zhao, Yuxin Chen, Fuli Feng, Xueyuan Hao, Xi Su, Qi Gu, Hui Su, Xunliang Cai, Xiangnan He
cs.AI

Abstract

As reinforcement learning continues to scale the training of large language model-based agents, reliably verifying agent behaviors in complex environments has become increasingly challenging. Existing approaches rely on rule-based verifiers or LLM-as-a-Judge models, which struggle to generalize beyond narrow domains. Agent-as-a-Judge addresses this limitation by actively interacting with environments and tools to acquire verifiable evidence, yet its capabilities remain underexplored. We introduce AJ-Bench, a benchmark that systematically evaluates Agent-as-a-Judge across three domains (search, data systems, and graphical user interfaces), comprising 155 tasks and 516 annotated trajectories. The benchmark comprehensively assesses judge agents' abilities in information acquisition, state verification, and process verification. Experiments demonstrate consistent performance gains over LLM-as-a-Judge baselines, while also revealing substantial open challenges in agent-based verification. Our data and code are available at https://aj-bench.github.io/.
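The core mechanic the abstract describes, verifying by acting on the environment rather than by reading a trajectory alone, can be summarized as a tool-use loop: the judge agent repeatedly plans a probe, executes it against the environment, accumulates the observations as evidence, and only then issues a verdict. Below is a minimal sketch of such a loop, written as an illustration only; every name here (agent_as_a_judge, plan_action, decide, check_file) is a hypothetical placeholder, not an API from the paper or its released code.

```python
# A minimal, hypothetical sketch of an Agent-as-a-Judge loop, as described in
# the abstract. All names are illustrative assumptions, not the paper's API.
from __future__ import annotations
from dataclasses import dataclass
from typing import Callable, Optional, Tuple

@dataclass
class Verdict:
    passed: bool          # final judgment on the trajectory
    evidence: list[str]   # observations gathered from the environment

def agent_as_a_judge(
    task: str,
    trajectory: list[str],
    tools: dict[str, Callable[[str], str]],
    plan_action: Callable[[str, list[str]], Optional[Tuple[str, str]]],
    decide: Callable[[str, list[str], list[str]], bool],
    max_steps: int = 5,
) -> Verdict:
    """Actively probe the environment for evidence, then judge the trajectory."""
    evidence: list[str] = []
    for _ in range(max_steps):
        action = plan_action(task, evidence)  # e.g., an LLM picks the next probe
        if action is None:                    # judge decides it has enough evidence
            break
        tool_name, tool_input = action
        evidence.append(tools[tool_name](tool_input))  # observe verifiable state
    return Verdict(passed=decide(task, trajectory, evidence), evidence=evidence)

# Toy usage: state verification for a "create a file" task. A text-only
# LLM-as-a-Judge sees only the trajectory; the judge agent inspects the
# (simulated) environment itself before ruling.
tools = {"check_file": lambda path: f"exists={path == '/tmp/report.txt'}"}
plan = lambda task, ev: None if ev else ("check_file", "/tmp/report.txt")
rule = lambda task, traj, ev: any("exists=True" in e for e in ev)
print(agent_as_a_judge("create /tmp/report.txt", ["touch /tmp/report.txt"],
                       tools, plan, rule))
```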