OpenCoder: The Open Cookbook for Top-Tier Code Large Language Models

November 7, 2024
Authors: Siming Huang, Tianhao Cheng, Jason Klein Liu, Jiaran Hao, Liuyihan Song, Yang Xu, J. Yang, J. H. Liu, Chenchen Zhang, Linzheng Chai, Ruifeng Yuan, Zhaoxiang Zhang, Jie Fu, Qian Liu, Ge Zhang, Zili Wang, Yuan Qi, Yinghui Xu, Wei Chu
cs.AI

Abstract

Large language models (LLMs) for code have become indispensable in various domains, including code generation, reasoning tasks, and agent systems. While open-access code LLMs are increasingly approaching the performance levels of proprietary models, high-quality code LLMs suitable for rigorous scientific investigation, particularly those with reproducible data processing pipelines and transparent training protocols, remain limited. This scarcity is due to various challenges, including resource constraints, ethical considerations, and the competitive advantage of keeping models advanced. To address this gap, we introduce OpenCoder, a top-tier code LLM that not only achieves performance comparable to leading models but also serves as an "open cookbook" for the research community. Unlike most prior efforts, we release not only model weights and inference code, but also the reproducible training data, the complete data processing pipeline, rigorous experimental ablation results, and detailed training protocols for open scientific research. Through this comprehensive release, we identify the key ingredients for building a top-tier code LLM: (1) code-optimized heuristic rules for data cleaning and methods for data deduplication, (2) recall of text corpora related to code, and (3) high-quality synthetic data in both the annealing and supervised fine-tuning stages. By offering this level of openness, we aim to broaden access to all aspects of a top-tier code LLM, with OpenCoder serving as both a powerful model and an open foundation to accelerate research and enable reproducible advancements in code AI.
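To make the first ingredient concrete, here is a minimal sketch of heuristic cleaning rules combined with exact, hash-based file-level deduplication. This is an illustration of the general technique the abstract names, not OpenCoder's published pipeline: the `passes_heuristics` function, its thresholds, and the choice of exact SHA-256 hashing (rather than, say, fuzzy MinHash deduplication) are all illustrative assumptions.

```python
# Illustrative sketch: heuristic quality filtering + exact deduplication
# for a raw code corpus. Rules and thresholds are hypothetical placeholders.
import hashlib


def passes_heuristics(code: str) -> bool:
    """Toy quality filters of the kind used to clean raw code data."""
    lines = code.splitlines()
    if not lines:
        return False
    max_line_len = max(len(line) for line in lines)
    alnum_ratio = sum(c.isalnum() for c in code) / max(len(code), 1)
    # Hypothetical cutoffs: drop minified/binary-like or near-empty files.
    return max_line_len < 1000 and alnum_ratio > 0.25


def deduplicate(docs: list[str]) -> list[str]:
    """Keep the first occurrence of each exact file, after filtering."""
    seen: set[str] = set()
    kept: list[str] = []
    for doc in docs:
        digest = hashlib.sha256(doc.encode("utf-8")).hexdigest()
        if digest not in seen and passes_heuristics(doc):
            seen.add(digest)
            kept.append(doc)
    return kept


if __name__ == "__main__":
    corpus = ["print('hello')", "print('hello')", "x" * 5000]
    print(len(deduplicate(corpus)))  # -> 1 (duplicate and minified file dropped)
```

In a production pipeline these stages would typically run over sharded data with near-duplicate detection as well, but the structure, per-file quality rules followed by corpus-level deduplication, matches what the abstract describes.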
