Learning to Commit: Generating Organic Pull Requests via Online Repository Memory
March 27, 2026
Authors: Mo Li, L. H. Xu, Qitai Tan, Ting Cao, Yunxin Liu
cs.AI
Abstract
Large language model (LLM)-based coding agents achieve impressive results on controlled benchmarks, yet routinely produce pull requests that real maintainers reject. The root cause is not functional incorrectness but a lack of organicity: generated code ignores project-specific conventions, duplicates functionality already provided by internal APIs, and violates implicit architectural constraints accumulated over years of development. Simply exposing an agent to the latest repository snapshot is not enough: the snapshot reveals the final state of the codebase, but not the repository-specific change patterns by which that state was reached. We introduce Learning to Commit, a framework that closes this gap through Online Repository Memory. Given a repository with a strict chronological split, the agent performs supervised contrastive reflection on earlier commits: it blindly attempts to resolve each historical issue, compares its prediction against the oracle diff, and distils the gap into a continuously growing set of skills: reusable patterns capturing coding style, internal API usage, and architectural invariants. When a new PR description arrives, the agent conditions its generation on these accumulated skills, producing changes grounded in the project's own evolution rather than generic pretraining priors. Evaluation is conducted on genuinely future, merged pull requests that could not have been seen during the skill-building phase, and spans multiple dimensions including functional correctness, code-style consistency, internal API reuse rate, and modified-region plausibility. Experiments on an expert-maintained repository with rich commit history show that Online Repository Memory effectively improves organicity scores on held-out future tasks.
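The two phases described above (contrastive reflection over the chronological history, then skill-conditioned generation) can be sketched as a minimal loop. This is an illustrative outline, not the paper's implementation: the `Commit`, `RepositoryMemory`, `build_memory`, and `generate_pr` names are hypothetical, and the `attempt` callable stands in for the LLM agent, which the paper does not specify at this level of detail.

```python
from dataclasses import dataclass, field

@dataclass
class Commit:
    """A historical commit from the earlier (training) side of the chronological split."""
    issue: str        # the issue / PR description the commit resolved
    oracle_diff: str  # the real merged diff (the ground truth)

@dataclass
class RepositoryMemory:
    """Continuously growing skill set distilled from the repository's own history."""
    skills: list = field(default_factory=list)

    def distil(self, issue: str, predicted_diff: str, oracle_diff: str) -> None:
        # Contrastive reflection: record the gap between prediction and oracle.
        # A real system would summarise this gap with an LLM into a reusable
        # pattern (style rule, internal API, invariant); here we store it verbatim.
        if predicted_diff != oracle_diff:
            self.skills.append(
                f"For tasks like {issue!r}: prefer {oracle_diff!r} "
                f"over {predicted_diff!r}"
            )

def build_memory(history: list, attempt) -> RepositoryMemory:
    """Phase 1: blind attempts on chronologically ordered historical commits.

    `attempt(issue, skills)` is the agent; the oracle diff is withheld from it.
    """
    memory = RepositoryMemory()
    for commit in history:  # strict chronological order
        predicted = attempt(commit.issue, memory.skills)
        memory.distil(commit.issue, predicted, commit.oracle_diff)
    return memory

def generate_pr(description: str, memory: RepositoryMemory, attempt) -> str:
    """Phase 2: generate a change conditioned on the accumulated skills."""
    return attempt(description, memory.skills)
```

The key design point the sketch preserves is the strict chronological split: skills are only ever distilled from commits that precede the evaluation window, so the held-out future PRs remain genuinely unseen.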