
Towards a Unified View of Large Language Model Post-Training

September 4, 2025
Authors: Xingtai Lv, Yuxin Zuo, Youbang Sun, Hongyi Liu, Yuntian Wei, Zhekai Chen, Lixuan He, Xuekai Zhu, Kaiyan Zhang, Bingning Wang, Ning Ding, Bowen Zhou
cs.AI

Abstract

Two major sources of training data exist for post-training modern language models: online data (model-generated rollouts) and offline data (human or other-model demonstrations). These two types of data are typically used by approaches like Reinforcement Learning (RL) and Supervised Fine-Tuning (SFT), respectively. In this paper, we show that these approaches are not in contradiction, but are instances of a single optimization process. We derive a Unified Policy Gradient Estimator, and present the calculations of a wide spectrum of post-training approaches as the gradient of a common objective under different data distribution assumptions and various bias-variance tradeoffs. The gradient estimator is constructed from four interchangeable parts: a stabilization mask, a reference-policy denominator, an advantage estimate, and a likelihood gradient. Motivated by our theoretical findings, we propose Hybrid Post-Training (HPT), an algorithm that dynamically selects different training signals. HPT is designed to yield both effective exploitation of demonstrations and stable exploration without sacrificing learned reasoning patterns. We provide extensive experiments and ablation studies to verify the effectiveness of our unified theoretical framework and HPT. Across six mathematical reasoning benchmarks and two out-of-distribution suites, HPT consistently surpasses strong baselines across models of varying scales and families.
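The abstract names the four interchangeable parts of the estimator and HPT's dynamic signal selection but does not spell out how they compose. The sketch below is one plausible reading, not the paper's exact formulation: `unified_gradient_weight` is the scalar coefficient multiplying the likelihood gradient (combining the stabilization mask, advantage estimate, and reference-policy denominator), and `select_signal` is a hypothetical HPT-style rule that falls back to SFT when the policy's own rollouts rarely succeed. The threshold and success-rate rule are assumptions for illustration.

```python
def unified_gradient_weight(mask: bool, advantage: float,
                            pi_theta: float, pi_ref: float) -> float:
    """Coefficient on the log-likelihood gradient, read off as
    mask * advantage / pi_ref applied to grad pi_theta.
    Using grad pi = pi * grad log pi, the coefficient on
    grad log pi_theta(y|x) is mask * A * pi_theta / pi_ref.
    E.g., pi_ref = pi_theta with A = 1 recovers the plain SFT
    (maximum-likelihood) gradient; pi_ref = pi_old with a
    normalized advantage gives a PPO/GRPO-style RL term.
    """
    if not mask:  # stabilization mask zeroes out unstable samples
        return 0.0
    return advantage * pi_theta / pi_ref


def select_signal(rollout_rewards: list[float],
                  threshold: float = 0.5) -> str:
    """Hypothetical HPT-style switch: if the policy already solves the
    problem often enough on its own rollouts, keep exploring with RL;
    otherwise exploit demonstrations via SFT."""
    success_rate = sum(rollout_rewards) / len(rollout_rewards)
    return "rl" if success_rate >= threshold else "sft"
```

With this composition, the SFT and RL special cases differ only in which denominator and advantage are plugged in, which is the sense in which the abstract calls them instances of one optimization process.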