EvoClaw: Evaluating AI Agents on Continuous Software Evolution

March 13, 2026
作者: Gangda Deng, Zhaoling Chen, Zhongming Yu, Haoyang Fan, Yuhong Liu, Yuxin Yang, Dhruv Parikh, Rajgopal Kannan, Le Cong, Mengdi Wang, Qian Zhang, Viktor Prasanna, Xiangru Tang, Xingyao Wang
cs.AI

Abstract

As AI agents are increasingly deployed as long-running systems, it becomes essential for them to autonomously construct and continuously evolve customized software so they can interact with dynamic environments. Yet existing benchmarks evaluate agents on isolated, one-off coding tasks, neglecting the temporal dependencies and technical debt inherent in real-world software evolution. To bridge this gap, we introduce DeepCommit, an agentic pipeline that reconstructs verifiable Milestone DAGs from noisy commit logs, where milestones are defined as semantically cohesive development goals. These executable sequences enable EvoClaw, a novel benchmark that requires agents to sustain system integrity and limit error accumulation, two dimensions of long-term software evolution largely missing from current benchmarks. Our evaluation of 12 frontier models across 4 agent frameworks reveals a critical vulnerability: overall performance drops from over 80% on isolated tasks to at most 38% in continuous settings, exposing agents' profound difficulty with long-term maintenance and error propagation.
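The abstract does not spell out DeepCommit's data structures, but a minimal sketch may help make the Milestone-DAG idea concrete. The sketch below assumes each milestone carries a goal description, its underlying commits, and a verification command, and that an agent replays milestones in topological order so every prerequisite is completed before its dependents; all identifiers here (Milestone, MilestoneDAG, replay_order, the example test commands) are hypothetical illustrations, not the paper's actual API.

```python
from __future__ import annotations
from dataclasses import dataclass, field
from graphlib import TopologicalSorter  # stdlib, Python 3.9+


@dataclass(frozen=True)
class Milestone:
    """A semantically cohesive development goal recovered from commit history."""
    id: str
    goal: str                 # natural-language description of the goal
    commits: tuple[str, ...]  # commit hashes grouped under this milestone
    check: str                # hypothetical verification command, e.g. a test suite


@dataclass
class MilestoneDAG:
    """Milestones plus dependency edges: each id maps to its prerequisite ids."""
    milestones: dict[str, Milestone] = field(default_factory=dict)
    prereqs: dict[str, set[str]] = field(default_factory=dict)

    def add(self, m: Milestone, after: set[str] | None = None) -> None:
        self.milestones[m.id] = m
        self.prereqs[m.id] = set(after or ())

    def replay_order(self) -> list[Milestone]:
        """Topological order in which an agent must attempt the milestones."""
        return [self.milestones[i]
                for i in TopologicalSorter(self.prereqs).static_order()]


# Toy usage: the agent is evaluated on m2 only after m1's integrity holds.
dag = MilestoneDAG()
dag.add(Milestone("m1", "add config parser", ("a1b2c3",),
                  "pytest tests/test_config.py"))
dag.add(Milestone("m2", "support env-var overrides", ("d4e5f6",),
                  "pytest tests/test_env.py"), after={"m1"})
for m in dag.replay_order():
    print(m.id, "->", m.goal)
```

A DAG, rather than a flat sequence, lets independent milestones remain unordered while still capturing the temporal dependencies the benchmark is built around.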