Self-Improving Pretraining: using post-trained models to pretrain better models
January 29, 2026
Authors: Ellen Xiaoqing Tan, Shehzaad Dhuliawala, Jing Xu, Ping Yu, Sainbayar Sukhbaatar, Jason Weston, Olga Golovneva
cs.AI
Abstract
Ensuring safety, factuality and overall quality in the generations of large language models is a critical challenge, especially as these models are increasingly deployed in real-world applications. The prevailing approach to addressing these issues involves collecting expensive, carefully curated datasets and applying multiple stages of fine-tuning and alignment. However, even this complex pipeline cannot guarantee the correction of patterns learned during pretraining. Therefore, addressing these issues during pretraining is crucial, as it shapes a model's core behaviors and prevents unsafe or hallucinated outputs from becoming deeply embedded. To tackle this issue, we introduce a new pretraining method that streams documents and uses reinforcement learning (RL) to improve the next K generated tokens at each step. A strong, post-trained model judges candidate generations -- including model rollouts, the original suffix, and a rewritten suffix -- for quality, safety, and factuality. Early in training, the process relies on the original and rewritten suffixes; as the model improves, RL rewards high-quality rollouts. This approach builds higher quality, safer, and more factual models from the ground up. In experiments, our method gives 36.2% and 18.5% relative improvements over standard pretraining in terms of factuality and safety, respectively, and up to an 86.3% win-rate improvement in overall generation quality.
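To make the per-step procedure concrete, the following is a minimal Python sketch of the candidate-scoring loop as the abstract describes it. It simplifies the RL step to best-of-N selection (taking the highest-judge-scored continuation as the training target rather than computing a policy-gradient update), and every name in it -- generate_rollouts, rewrite_suffix, judge_score, the window size K -- is a hypothetical placeholder, not the paper's actual code or API.

```python
# Sketch of judge-scored candidate selection over a streamed document.
# Assumptions: callers supply three callables (a policy sampler, a suffix
# rewriter, and a post-trained judge); RL is approximated by picking the
# best-scored candidate as the next training target.
from dataclasses import dataclass
from typing import Callable, Iterator, List, Tuple

K = 64  # number of next tokens to improve per step (the paper's "K"; value assumed)


@dataclass
class Candidate:
    tokens: List[int]
    source: str  # "rollout" | "original" | "rewrite"


def best_candidate(
    prefix: List[int],
    suffix: List[int],
    generate_rollouts: Callable[[List[int], int], List[List[int]]],  # hypothetical policy sampler
    rewrite_suffix: Callable[[List[int], List[int]], List[int]],     # hypothetical rewriter
    judge_score: Callable[[List[int], List[int]], float],            # hypothetical post-trained judge
    n_rollouts: int = 4,
) -> Candidate:
    """Score all candidate continuations of `prefix` and return the winner."""
    candidates = [
        Candidate(suffix[:K], "original"),
        Candidate(rewrite_suffix(prefix, suffix)[:K], "rewrite"),
    ]
    candidates += [
        Candidate(r[:K], "rollout") for r in generate_rollouts(prefix, n_rollouts)
    ]
    # The judge rates each continuation for quality, safety, and factuality;
    # here those criteria are collapsed into a single scalar score.
    return max(candidates, key=lambda c: judge_score(prefix, c.tokens))


def stream_training_targets(
    document: List[int], **fns: Callable
) -> Iterator[Tuple[List[int], Candidate]]:
    """Walk a document K tokens at a time, yielding (prefix, target) pairs.

    Early in training, rollouts score poorly and the original/rewritten
    suffixes win; as the policy improves, its own rollouts start to be
    selected -- mirroring the dynamic described in the abstract.
    """
    for i in range(K, len(document), K):
        prefix, suffix = document[:i], document[i : i + K]
        yield prefix, best_candidate(prefix, suffix, **fns)
```

The hard argmax over candidates is only a stand-in: a faithful implementation would feed the judge scores into an RL objective as rewards for the sampled rollouts, keeping the original and rewritten suffixes as high-scoring fallbacks when the policy's own generations are still weak.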