OpenCoder: The Open Cookbook for Top-Tier Code Large Language Models

November 7, 2024
Authors: Siming Huang, Tianhao Cheng, Jason Klein Liu, Jiaran Hao, Liuyihan Song, Yang Xu, J. Yang, J. H. Liu, Chenchen Zhang, Linzheng Chai, Ruifeng Yuan, Zhaoxiang Zhang, Jie Fu, Qian Liu, Ge Zhang, Zili Wang, Yuan Qi, Yinghui Xu, Wei Chu
cs.AI

Abstract

Large language models (LLMs) for code have become indispensable in various domains, including code generation, reasoning tasks, and agent systems. While open-access code LLMs are increasingly approaching the performance levels of proprietary models, high-quality code LLMs suitable for rigorous scientific investigation, particularly those with reproducible data processing pipelines and transparent training protocols, remain limited. This scarcity stems from various challenges, including resource constraints, ethical considerations, and the competitive advantage of keeping models ahead of the field. To address the gap, we introduce OpenCoder, a top-tier code LLM that not only achieves performance comparable to leading models but also serves as an "open cookbook" for the research community. Unlike most prior efforts, we release not only the model weights and inference code, but also the reproducible training data, the complete data processing pipeline, rigorous experimental ablation results, and detailed training protocols for open scientific research. Through this comprehensive release, we identify the key ingredients for building a top-tier code LLM: (1) code-optimized heuristic rules for data cleaning and methods for data deduplication, (2) recall of text corpora related to code, and (3) high-quality synthetic data in both the annealing and supervised fine-tuning stages. By offering this level of openness, we aim to broaden access to all aspects of a top-tier code LLM, with OpenCoder serving as both a powerful model and an open foundation to accelerate research and enable reproducible advancement in code AI.
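As a concrete illustration of ingredient (1), the sketch below shows the general shape of such a pipeline: heuristic filtering of code files, followed by exact and near-deduplication. Every rule, threshold, and function name here is an illustrative assumption rather than OpenCoder's published configuration.

```python
# Illustrative sketch only: the heuristics and thresholds below are
# assumptions for demonstration, not OpenCoder's published pipeline.
import hashlib
import re


def passes_heuristics(code: str, max_line_len: int = 1000,
                      min_alpha_frac: float = 0.25) -> bool:
    """Toy cleaning rules: reject empty files, files with extremely long
    lines, or files with little alphabetic content (e.g. minified blobs)."""
    lines = code.splitlines()
    if not lines or any(len(line) > max_line_len for line in lines):
        return False
    alpha = sum(ch.isalpha() for ch in code)
    return alpha / max(len(code), 1) >= min_alpha_frac


def shingles(code: str, k: int = 5) -> set[str]:
    """Token k-gram shingles used to compare files for near-duplication."""
    tokens = re.findall(r"\w+", code)
    return {" ".join(tokens[i:i + k])
            for i in range(max(len(tokens) - k + 1, 0))}


def deduplicate(files: list[str], jaccard_threshold: float = 0.85) -> list[str]:
    """Exact dedup via content hashing, then greedy near-dedup via Jaccard
    similarity of shingle sets (O(n^2); an LSH index replaces this at scale)."""
    seen_hashes: set[str] = set()
    kept: list[str] = []
    kept_shingles: list[set[str]] = []
    for code in files:
        if not passes_heuristics(code):
            continue
        digest = hashlib.sha256(code.encode("utf-8")).hexdigest()
        if digest in seen_hashes:  # exact duplicate of a kept file
            continue
        seen_hashes.add(digest)
        s = shingles(code)
        if any(len(s & t) / max(len(s | t), 1) >= jaccard_threshold
               for t in kept_shingles):  # near duplicate of a kept file
            continue
        kept.append(code)
        kept_shingles.append(s)
    return kept
```

A production pipeline would add many more language-aware cleaning rules and scale the near-duplicate search with a MinHash/LSH index rather than the quadratic scan shown here.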
