

daVinci-LLM: Towards the Science of Pretraining

March 28, 2026
Authors: Yiwei Qin, Yixiu Liu, Tiantian Mi, Muhang Xie, Zhen Huang, Weiye Si, Pengrui Lu, Siyuan Feng, Xia Wu, Liming Liu, Ye Luo, Jinlong Hou, Qipeng Guo, Yu Qiao, Pengfei Liu
cs.AI

Abstract

The foundational pretraining phase determines a model's capability ceiling, as post-training struggles to overcome the capability foundations established during pretraining, yet this critical area remains under-explored. This stems from a structural paradox: organizations with computational resources operate under commercial pressures that inhibit transparent disclosure, while academic institutions possess research freedom but lack pretraining-scale compute. daVinci-LLM occupies this unexplored intersection, combining industrial-scale resources with full research freedom to advance the science of pretraining. We adopt a fully open paradigm that treats openness as scientific methodology, releasing complete data processing pipelines, full training processes, and systematic exploration results. Recognizing that the field lacks a systematic methodology for data processing, we employ the Data Darwinism framework, a principled L0-L9 taxonomy spanning filtering to synthesis. We train a 3B-parameter model from random initialization across 8T tokens using a two-stage adaptive curriculum that progressively shifts from foundational capabilities to reasoning-intensive enhancement. Through 200+ controlled ablations, we establish that: processing depth systematically enhances capabilities, making it a critical dimension alongside volume scaling; different domains exhibit distinct saturation dynamics, necessitating adaptive strategies ranging from proportion adjustments to format shifts; compositional balance enables targeted intensification while preventing performance collapse; and evaluation protocol choices profoundly shape our understanding of pretraining progress. By releasing the complete exploration process, we enable the community to build upon our findings and systematic methodologies to form cumulative scientific knowledge in pretraining.
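To make the "two-stage adaptive curriculum" concrete, the following is a minimal, hypothetical sketch of a data-mixture schedule that holds a foundational mix in stage 1 and then ramps toward a reasoning-heavy mix in stage 2. The domain names, proportions, and stage boundary are illustrative assumptions, not the paper's actual configuration.

```python
# Hypothetical two-stage data-mixture schedule. All domain names,
# weights, and the stage boundary below are illustrative assumptions.

STAGE1_MIX = {"web": 0.70, "code": 0.15, "math": 0.05, "reasoning": 0.10}
STAGE2_MIX = {"web": 0.35, "code": 0.20, "math": 0.15, "reasoning": 0.30}

def mixture_at(tokens_seen: float, total_tokens: float = 8e12,
               stage1_frac: float = 0.75) -> dict:
    """Return per-domain sampling weights at a given point in training.

    Stage 1 (first `stage1_frac` of tokens) uses the foundational mix;
    stage 2 linearly interpolates toward the reasoning-heavy mix.
    """
    boundary = stage1_frac * total_tokens
    if tokens_seen < boundary:
        return dict(STAGE1_MIX)
    # Linear interpolation over the remaining tokens of stage 2.
    t = (tokens_seen - boundary) / (total_tokens - boundary)
    return {k: (1 - t) * STAGE1_MIX[k] + t * STAGE2_MIX[k]
            for k in STAGE1_MIX}
```

Early in training the sampler draws mostly foundational web data; by the end of the 8T-token budget the reasoning share has tripled under this sketch, while the weights always sum to one.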