SWE-CI: Evaluating Agent Capabilities in Maintaining Codebases via Continuous Integration
March 4, 2026
Authors: Jialong Chen, Xander Xu, Hu Wei, Chuan Chen, Bing Zhao
cs.AI
Abstract
Large language model (LLM)-powered agents have demonstrated strong capabilities in automating software engineering tasks such as static bug fixing, as evidenced by benchmarks like SWE-bench. In the real world, however, the development of mature software is typically driven by complex requirement changes and long-term feature iterations -- a dynamic process that static, one-shot repair paradigms fail to capture. To bridge this gap, we propose SWE-CI, the first repository-level benchmark built upon the Continuous Integration loop, aiming to shift the evaluation paradigm for code generation from static, short-term functional correctness toward dynamic, long-term maintainability. The benchmark comprises 100 tasks, each corresponding on average to an evolution history spanning 233 days and 71 consecutive commits in a real-world code repository. SWE-CI requires agents to systematically resolve these tasks through dozens of rounds of analysis and coding iterations, providing valuable insights into how well agents can sustain code quality throughout long-term evolution.