

SWE-CI: Evaluating Agent Capabilities in Maintaining Codebases via Continuous Integration

March 4, 2026
Authors: Jialong Chen, Xander Xu, Hu Wei, Chuan Chen, Bing Zhao
cs.AI

Abstract

Large language model (LLM)-powered agents have demonstrated strong capabilities in automating software engineering tasks such as static bug fixing, as evidenced by benchmarks like SWE-bench. However, in the real world, the development of mature software is typically driven by complex requirement changes and long-term feature iteration -- a dynamic process that static, one-shot repair paradigms fail to capture. To bridge this gap, we propose SWE-CI, the first repository-level benchmark built upon the Continuous Integration loop, aiming to shift the evaluation paradigm for code generation from static, short-term functional correctness toward dynamic, long-term maintainability. The benchmark comprises 100 tasks, each corresponding on average to an evolution history spanning 233 days and 71 consecutive commits in a real-world code repository. SWE-CI requires agents to systematically resolve these tasks through dozens of rounds of analysis and coding iterations, providing valuable insight into how well agents can sustain code quality throughout long-term evolution.
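The evaluation paradigm the abstract describes can be pictured as a loop in which each requirement change must pass a CI check before the trajectory continues. The sketch below is a minimal, hypothetical illustration of that idea, assuming stand-in names (`Task`, `run_agent`, `run_ci_suite`); it is not the SWE-CI harness itself.

```python
from dataclasses import dataclass

@dataclass
class Task:
    """One benchmark task: an ordered sequence of requirement changes
    distilled from a real repository's evolution history."""
    requirements: list
    passed_rounds: int = 0

def run_agent(requirement: str) -> str:
    # Stand-in for an LLM agent producing a patch for one requirement.
    return f"patch for: {requirement}"

def run_ci_suite(patch: str) -> bool:
    # Stand-in for the CI gate: build the patched repo and run its tests.
    return patch.startswith("patch for:")

def evaluate(task: Task) -> float:
    """Resolve requirements in order; CI gates every round, so the score
    reflects sustained maintainability rather than one-shot repair."""
    for req in task.requirements:
        patch = run_agent(req)
        if not run_ci_suite(patch):
            break  # a failing CI round ends the evolution trajectory
        task.passed_rounds += 1
    return task.passed_rounds / len(task.requirements)

task = Task(requirements=["add config parser", "refactor API", "fix regression"])
score = evaluate(task)  # fraction of CI-passing rounds for this trajectory
```

With the always-passing stubs above the score is trivially 1.0; in the real benchmark the CI gate runs the repository's actual build and test suite at each commit.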