On Pretraining for Project-Level Code Completion

October 15, 2025
Authors: Maksim Sapronov, Evgeniy Glukhov
cs.AI

Abstract

Repository-level pretraining is commonly used to enable large language models for code to leverage codebase-wide context, enhancing their ability to generate accurate and context-aware code completions. In this work, we investigate how different repository-processing strategies affect in-context learning in OpenCoder, a 1.5B-parameter model. We extend its context window from 4,096 to 16,384 tokens by training on an additional 1B tokens of curated repository-level data. Despite relying on a smaller dataset than competing models (which often use hundreds of billions of tokens), our model achieves comparable performance on the Long Code Arena benchmark. We find that various repository-processing techniques yield similarly strong results, with the primary gain coming from adapting to a new rotary positional embedding (RoPE) scaling parameter. Finally, we show that a simpler file-level training approach at the original sequence length remains highly effective, opening up repository-level code completion research to settings with more constrained data and compute resources.
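The abstract attributes the main gain to adapting a new RoPE scaling parameter while extending the context window from 4,096 to 16,384 tokens. As a rough illustration only, the sketch below shows how such a context extension is commonly configured with the Hugging Face `transformers` library before continued pretraining; the checkpoint identifier and the linear-scaling scheme are assumptions, not details taken from the paper.

```python
# Minimal sketch (assumptions, not the authors' released code): extending a
# 4,096-token code LLM to a 16,384-token context by changing the RoPE scaling
# configuration before continued pretraining.
from transformers import AutoConfig, AutoModelForCausalLM

MODEL_NAME = "infly/OpenCoder-1.5B-Base"  # assumed checkpoint identifier

config = AutoConfig.from_pretrained(MODEL_NAME)
config.max_position_embeddings = 16_384  # 4x the original 4,096-token window
# Illustrative choice: the paper says a new RoPE scaling parameter is adapted,
# but does not specify which scaling scheme; linear scaling is one common option.
config.rope_scaling = {"type": "linear", "factor": 4.0}

model = AutoModelForCausalLM.from_pretrained(MODEL_NAME, config=config)

# Continued pretraining on ~1B tokens of curated repository-level data would
# then let the model adapt to the rescaled positional embeddings.
```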