Statistical Rejection Sampling Improves Preference Optimization
September 13, 2023
Authors: Tianqi Liu, Yao Zhao, Rishabh Joshi, Misha Khalman, Mohammad Saleh, Peter J. Liu, Jialu Liu
cs.AI
Abstract
Improving the alignment of language models with human preferences remains an
active research challenge. Previous approaches have primarily utilized
Reinforcement Learning from Human Feedback (RLHF) via online RL methods such as
Proximal Policy Optimization (PPO). Recently, offline methods such as Sequence
Likelihood Calibration (SLiC) and Direct Preference Optimization (DPO) have
emerged as attractive alternatives, offering improvements in stability and
scalability while maintaining competitive performance. SLiC refines its loss
function using sequence pairs sampled from a supervised fine-tuned (SFT)
policy, while DPO directly optimizes language models based on preference data,
foregoing the need for a separate reward model. However, the maximum likelihood
estimator (MLE) of the target optimal policy requires labeled preference pairs
sampled from that policy. DPO's lack of a reward model constrains its ability
to sample preference pairs from the optimal policy, and SLiC is restricted to
sampling preference pairs only from the SFT policy. To address these
limitations, we introduce a novel approach called Statistical Rejection
Sampling Optimization (RSO) that aims to source preference data from the target
optimal policy using rejection sampling, enabling a more accurate estimation of
the optimal policy. We also propose a unified framework that enhances the loss
functions used in both SLiC and DPO from a preference modeling standpoint.
Through extensive experiments across three diverse tasks, we demonstrate that
RSO consistently outperforms both SLiC and DPO on evaluations from both Large
Language Model (LLM) and human raters.
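The abstract does not spell out the sampling step, so below is a minimal sketch of how statistical rejection sampling can approximate draws from the target optimal policy, which under the usual KL-regularized objective is the SFT policy reweighted by exp(reward / beta). The helper names (`sample_from_sft`, `reward_model`) and the hyperparameters (`beta`, `num_candidates`, `num_accept`) are illustrative assumptions, not taken from the paper.

```python
import math
import random

def rejection_sample_from_optimal_policy(prompt, sample_from_sft, reward_model,
                                          beta=0.5, num_candidates=64, num_accept=8):
    """Approximately sample responses from the KL-regularized optimal policy
    pi*(y|x) proportional to pi_sft(y|x) * exp(r(x, y) / beta)
    using statistical rejection sampling with the SFT policy as the proposal.
    """
    # Draw a pool of candidate responses from the SFT policy and score them
    # with the (assumed) reward model.
    candidates = [sample_from_sft(prompt) for _ in range(num_candidates)]
    rewards = [reward_model(prompt, y) for y in candidates]
    max_reward = max(rewards)

    accepted = []
    # Accept each candidate with probability exp((r - r_max) / beta), the
    # standard rejection-sampling acceptance ratio when the proposal is the
    # SFT policy and the envelope constant is estimated from the pool.
    # Higher-reward responses are kept more often, tilting the SFT samples
    # toward the optimal policy; a smaller beta tilts more aggressively.
    for y, r in zip(candidates, rewards):
        if random.random() < math.exp((r - max_reward) / beta):
            accepted.append((y, r))
        if len(accepted) >= num_accept:
            break
    # May return fewer than num_accept responses if few candidates are accepted.
    return accepted
```

In the RSO recipe described above, the accepted responses would then be paired, labeled by the reward model into preferred and rejected responses, and used to fit a DPO- or SLiC-style preference loss; the exact acceptance schedule and pairing strategy in the paper may differ from this sketch.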