Self-Improving Pretraining: using post-trained models to pretrain better models
January 29, 2026
Authors: Ellen Xiaoqing Tan, Shehzaad Dhuliawala, Jing Xu, Ping Yu, Sainbayar Sukhbaatar, Jason Weston, Olga Golovneva
cs.AI
Abstract
Ensuring safety, factuality and overall quality in the generations of large language models is a critical challenge, especially as these models are increasingly deployed in real-world applications. The prevailing approach to addressing these issues involves collecting expensive, carefully curated datasets and applying multiple stages of fine-tuning and alignment. However, even this complex pipeline cannot guarantee the correction of patterns learned during pretraining. Therefore, addressing these issues during pretraining is crucial, as it shapes a model's core behaviors and prevents unsafe or hallucinated outputs from becoming deeply embedded. To tackle this issue, we introduce a new pretraining method that streams documents and uses reinforcement learning (RL) to improve the next K generated tokens at each step. A strong, post-trained model judges candidate generations -- including model rollouts, the original suffix, and a rewritten suffix -- for quality, safety, and factuality. Early in training, the process relies on the original and rewritten suffixes; as the model improves, RL rewards high-quality rollouts. This approach builds higher quality, safer, and more factual models from the ground up. In experiments, our method gives 36.2% and 18.5% relative improvements over standard pretraining in terms of factuality and safety, and up to 86.3% win rate improvements in overall generation quality.
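To make the described procedure concrete, below is a minimal Python sketch of one streaming step, written under stated assumptions rather than as the authors' implementation: the policy, judge, and rewriter objects with `generate`, `score`, `rewrite`, and `rl_update` methods are hypothetical interfaces, the three judging criteria are folded into a single scalar reward, and the switch from suffix-based supervision to rewarding rollouts is approximated by a fixed step threshold.

```python
from dataclasses import dataclass


@dataclass
class Candidate:
    """A candidate continuation of the current document prefix."""
    tokens: list[int]
    source: str  # "rollout", "original_suffix", or "rewritten_suffix"


def pretraining_step(policy, judge, rewriter, prefix, original_suffix,
                     k, step, warmup_steps):
    """One streaming step: score candidate next-K-token continuations and update the policy."""
    # Candidate continuations for the next K tokens: the model's own rollout,
    # the document's original suffix, and a rewritten suffix.
    candidates = [
        Candidate(policy.generate(prefix, max_new_tokens=k), "rollout"),
        Candidate(original_suffix[:k], "original_suffix"),
        Candidate(rewriter.rewrite(prefix, original_suffix)[:k], "rewritten_suffix"),
    ]

    # A strong post-trained judge scores each candidate for quality, safety,
    # and factuality; here the three criteria are assumed to be folded into
    # one scalar reward.
    rewards = [judge.score(prefix, c.tokens) for c in candidates]

    # Early in training, rely on the original and rewritten suffixes as
    # supervision; once the model has improved, let RL reward its own
    # high-quality rollouts. A fixed step count is used here only as a
    # stand-in for that schedule.
    if step < warmup_steps:
        usable = [(c, r) for c, r in zip(candidates, rewards) if c.source != "rollout"]
    else:
        usable = list(zip(candidates, rewards))

    best, reward = max(usable, key=lambda pair: pair[1])

    # Update on the selected continuation; for non-rollout candidates this is
    # effectively supervised learning on reference text, for rollouts it is a
    # policy-gradient-style RL update.
    policy.rl_update(prefix, best.tokens, reward)
```

In practice the transition from relying on reference suffixes to rewarding rollouts would track the model's improving quality rather than a fixed `warmup_steps` count; that threshold is only an illustrative simplification.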