General Agent Evaluation
February 26, 2026
Authors: Elron Bandel, Asaf Yehudai, Lilach Eden, Yehoshua Sagron, Yotam Perlitz, Elad Venezian, Natalia Razinkov, Natan Ergas, Shlomit Shachor Ifergan, Segev Shlomov, Michal Jacovi, Leshem Choshen, Liat Ein-Dor, Yoav Katz, Michal Shmueli-Scheuer
cs.AI
Abstract
The promise of general-purpose agents - systems that perform tasks in unfamiliar environments without domain-specific engineering - remains largely unrealized. Existing agents are predominantly specialized, and while emerging implementations like OpenAI SDK Agent and Claude Code hint at broader capabilities, their general performance has not been systematically evaluated. Current agentic benchmarks assume domain-specific integration, encoding task information in ways that preclude fair evaluation of general agents. This paper frames general-agent evaluation as a first-class research objective. We propose conceptual principles for such evaluation, a Unified Protocol enabling agent-benchmark integration, and Exgentic, a practical framework for general agent evaluation. We benchmark five prominent agent implementations across six environments, creating the first Open General Agent Leaderboard. Our experiments show that general agents generalize across diverse environments, achieving performance comparable to that of domain-specific agents without any environment-specific tuning. We release our evaluation protocol, framework, and leaderboard to establish a foundation for systematic research on general-purpose agents.
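The Unified Protocol is the technical core of the proposal: a single interface through which any agent can attach to any benchmark, rather than pairwise, domain-specific integrations. As an illustration only, here is a minimal Python sketch of what such an agent-benchmark contract could look like; the names Task, Result, GeneralAgent, Benchmark, and evaluate are hypothetical and do not reflect Exgentic's actual API.

```python
from abc import ABC, abstractmethod
from dataclasses import dataclass, field


@dataclass
class Task:
    """A benchmark task expressed in agent-agnostic terms."""
    instruction: str                                      # natural-language goal
    tools: list[dict] = field(default_factory=list)       # tool schemas exposed by the environment


@dataclass
class Result:
    """What the agent hands back for scoring."""
    answer: str
    trajectory: list[dict] = field(default_factory=list)  # optional tool-call trace


class GeneralAgent(ABC):
    """Any agent implementation that can plug into any benchmark."""

    @abstractmethod
    def run(self, task: Task) -> Result: ...


class Benchmark(ABC):
    """Any environment that emits tasks and scores results."""

    @abstractmethod
    def tasks(self) -> list[Task]: ...

    @abstractmethod
    def score(self, task: Task, result: Result) -> float: ...


def evaluate(agent: GeneralAgent, benchmark: Benchmark) -> float:
    """Run the agent on every task and return the mean score."""
    scores = [benchmark.score(t, agent.run(t)) for t in benchmark.tasks()]
    return sum(scores) / len(scores) if scores else 0.0
```

Under a contract of this shape, adding a new environment or a new agent means implementing one class on either side of the interface, with no per-pair integration work, which is what makes a shared leaderboard across heterogeneous environments feasible.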