AACR-Bench: Evaluating Automatic Code Review with Holistic Repository-Level Context

January 27, 2026
Authors: Lei Zhang, Yongda Yu, Minghui Yu, Xinxin Guo, Zhengqi Zhuang, Guoping Rong, Dong Shao, Haifeng Shen, Hongyu Kuang, Zhengfeng Li, Boge Wang, Guoan Zhang, Bangyu Xiang, Xiaobin Xu
cs.AI

Abstract

High-quality evaluation benchmarks are pivotal for deploying Large Language Models (LLMs) in Automated Code Review (ACR). However, existing benchmarks suffer from two critical limitations: first, the lack of multi-language support in repository-level contexts, which restricts the generalizability of evaluation results; second, the reliance on noisy, incomplete ground truth derived from raw Pull Request (PR) comments, which constrains the scope of issue detection. To address these challenges, we introduce AACR-Bench, a comprehensive benchmark that provides full cross-file context across multiple programming languages. Unlike traditional datasets, AACR-Bench employs an "AI-assisted, Expert-verified" annotation pipeline to uncover latent defects often overlooked in the original PRs, resulting in a 285% increase in defect coverage. Extensive evaluations of mainstream LLMs on AACR-Bench reveal that previous assessments may have misjudged, or only partially captured, model capabilities due to data limitations. Our work establishes a more rigorous standard for ACR evaluation and offers new insights into LLM-based ACR: the granularity and level of context and the choice of retrieval method significantly affect ACR performance, and this influence varies with the LLM, the programming language, and the LLM usage paradigm (e.g., whether an agent architecture is employed). The code, data, and other artifacts of our evaluation set are available at https://github.com/alibaba/aacr-bench.
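To make the repository-level setting concrete, the sketch below shows one hypothetical way an ACR benchmark sample with cross-file context could be represented and turned into review prompts at different context granularities (diff-only, changed files, or retrieved repository context). This is not the official AACR-Bench schema or API; the class and function names (ReviewSample, build_review_prompt) and all fields are illustrative assumptions.

# Hypothetical sketch, not the AACR-Bench API: a repository-level ACR sample
# and prompt assembly at different context granularities. All names are assumed.
from dataclasses import dataclass, field


@dataclass
class ReviewSample:
    """One benchmark instance: a diff plus repository-level context."""
    language: str                                                      # e.g. "java", "python", "go"
    diff: str                                                          # unified diff of the change under review
    cross_file_context: dict[str, str] = field(default_factory=dict)  # file path -> source text
    expert_labels: list[str] = field(default_factory=list)            # expert-verified defect descriptions


def build_review_prompt(sample: ReviewSample, context_level: str = "diff") -> str:
    """Assemble a review prompt at a chosen context granularity.

    context_level:
      "diff" - only the changed hunks
      "repo" - the diff plus the available cross-file context
    """
    parts = [
        f"Review the following {sample.language} change and list any defects.",
        "=== DIFF ===",
        sample.diff,
    ]
    if context_level == "repo":
        for path, source in sample.cross_file_context.items():
            parts += [f"=== CONTEXT: {path} ===", source]
    return "\n".join(parts)


if __name__ == "__main__":
    sample = ReviewSample(
        language="python",
        diff=(
            "--- a/util.py\n+++ b/util.py\n"
            "@@ def parse(x):\n-    return int(x)\n+    return int(x or 0)"
        ),
        cross_file_context={"caller.py": "from util import parse\nprint(parse(None))"},
        expert_labels=["silently coerces None to 0 instead of raising"],
    )
    print(build_review_prompt(sample, context_level="repo"))

A harness along these lines would then send the assembled prompt to each LLM under test and score its findings against the expert-verified labels; varying context_level (and the retrieval strategy that fills cross_file_context) is one way to probe the context-granularity effects the paper reports.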