On Pretraining for Project-Level Code Completion
October 15, 2025
Authors: Maksim Sapronov, Evgeniy Glukhov
cs.AI
Abstract
Repository-level pretraining is commonly used to enable large language models
for code to leverage codebase-wide context. This enhances their ability to
generate accurate and context-aware code completions. In this work, we
investigate how different repository-processing strategies affect in-context
learning in OpenCoder, a 1.5B-parameter model. We extend its context window
from 4,096 to 16,384 tokens by training on an additional 1B tokens of curated
repository-level data. Despite relying on a smaller dataset than competing
models (which often use hundreds of billions of tokens), our model achieves
comparable performance on the Long Code Arena benchmark. We find that various
repository-processing techniques yield similarly strong results, with the
primary gain coming from adapting to a new rotary positional embedding (RoPE)
scaling parameter. Finally, we show that a simpler file-level training approach
at the original sequence length remains highly effective, opening up
repository-level code completion research to settings with more constrained
data and compute resources.
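
As a rough illustration of the RoPE-based context extension described above, the sketch below shows how one might configure a 4x RoPE scaling in Hugging Face transformers before continued pretraining. The checkpoint name, the choice of linear position interpolation, and the factor of 4.0 are assumptions for illustration; the abstract only states that the model adapts to a new RoPE scaling parameter, not the exact mechanism.

```python
# Illustrative sketch (assumptions noted above): extend a 1.5B code LM's
# context window from 4,096 to 16,384 tokens via RoPE scaling, then
# continue pretraining on repository-level data.
from transformers import AutoConfig, AutoModelForCausalLM

model_name = "infly/OpenCoder-1.5B-Base"  # assumed checkpoint name

config = AutoConfig.from_pretrained(model_name)
config.max_position_embeddings = 16_384                   # 4x the original 4,096
config.rope_scaling = {"type": "linear", "factor": 4.0}   # assumed: 4x linear position interpolation

model = AutoModelForCausalLM.from_pretrained(model_name, config=config)

# ...continue pretraining on ~1B tokens of curated repository-level sequences
# (e.g., files from the same repository packed into 16,384-token contexts)...
```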