BuildBench: Benchmarking LLM Agents on Compiling Real-World Open-Source Software
September 27, 2025
作者: Zehua Zhang, Ati Priya Bajaj, Divij Handa, Siyu Liu, Arvind S Raj, Hongkai Chen, Hulin Wang, Yibo Liu, Zion Leonahenahe Basque, Souradip Nath, Vishal Juneja, Nikhil Chapre, Yan Shoshitaishvili, Adam Doupé, Chitta Baral, Ruoyu Wang
cs.AI
Abstract
Automatically compiling open-source software (OSS) projects is a vital,
labor-intensive, and complex task, which makes it a good challenge for LLM
Agents. Existing methods rely on manually curated rules and workflows, which
cannot adapt to OSS that requires customized configuration or environment
setup. Recent attempts using Large Language Models (LLMs) performed selective
evaluation on a subset of highly rated OSS, a practice that underestimates the
realistic challenges of OSS compilation. In practice, compilation instructions
are often absent, dependencies are undocumented, and successful builds may even
require patching source files or modifying build scripts. We propose a more
challenging and realistic benchmark, BUILD-BENCH, comprising OSS that are more
diverse in quality, scale, and characteristics. Furthermore, we propose a
strong baseline LLM-based agent, OSS-BUILD-AGENT, an effective system with an
enhanced build instruction retrieval module that achieves state-of-the-art
performance on BUILD-BENCH and is adaptable to heterogeneous OSS
characteristics. We also provide a detailed analysis of different
compilation method design choices and their influence on the whole task,
offering insights to guide future advances. We believe performance on
BUILD-BENCH can faithfully reflect an agent's ability to tackle compilation as
a complex software engineering task, and, as such, our benchmark will spur
innovation with a significant impact on downstream applications in the fields
of software development and software security.