Statistical Rejection Sampling Improves Preference Optimization

September 13, 2023
Authors: Tianqi Liu, Yao Zhao, Rishabh Joshi, Misha Khalman, Mohammad Saleh, Peter J. Liu, Jialu Liu
cs.AI

Abstract

Improving the alignment of language models with human preferences remains an active research challenge. Previous approaches have primarily utilized Reinforcement Learning from Human Feedback (RLHF) via online RL methods such as Proximal Policy Optimization (PPO). Recently, offline methods such as Sequence Likelihood Calibration (SLiC) and Direct Preference Optimization (DPO) have emerged as attractive alternatives, offering improvements in stability and scalability while maintaining competitive performance. SLiC refines its loss function using sequence pairs sampled from a supervised fine-tuned (SFT) policy, while DPO directly optimizes language models based on preference data, foregoing the need for a separate reward model. However, the maximum likelihood estimator (MLE) of the target optimal policy requires labeled preference pairs sampled from that policy. DPO's lack of a reward model constrains its ability to sample preference pairs from the optimal policy, and SLiC is restricted to sampling preference pairs only from the SFT policy. To address these limitations, we introduce a novel approach called Statistical Rejection Sampling Optimization (RSO) that aims to source preference data from the target optimal policy using rejection sampling, enabling a more accurate estimation of the optimal policy. We also propose a unified framework that enhances the loss functions used in both SLiC and DPO from a preference modeling standpoint. Through extensive experiments across three diverse tasks, we demonstrate that RSO consistently outperforms both SLiC and DPO on evaluations from both Large Language Model (LLM) and human raters.
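The abstract does not spell out the sampling procedure, but the idea can be illustrated with a minimal sketch. Assuming the target optimal policy has the standard RLHF form pi*(y|x) ∝ pi_sft(y|x) · exp(r(x, y) / beta), candidates drawn from the SFT policy can be accepted or rejected based on their reward so that accepted responses approximate draws from pi*. The names `sample_sft`, `reward_fn`, `beta`, and the candidate counts below are illustrative placeholders, not values or APIs from the paper.

```python
import math
import random
from typing import Callable, List


def rso_rejection_sample(
    prompt: str,
    sample_sft: Callable[[str], str],        # hypothetical: draws one response from the SFT policy
    reward_fn: Callable[[str, str], float],  # hypothetical: learned reward model r(x, y)
    beta: float = 0.5,                       # KL temperature of the assumed optimal-policy form
    num_candidates: int = 64,
    num_to_accept: int = 8,
) -> List[str]:
    """Approximately sample responses from pi*(y|x) ∝ pi_sft(y|x) * exp(r(x, y) / beta)
    by rejection-sampling candidates proposed by the SFT policy."""
    # Propose candidates from the SFT policy and score them with the reward model.
    pool = [(y, reward_fn(prompt, y))
            for y in (sample_sft(prompt) for _ in range(num_candidates))]
    accepted: List[str] = []

    while pool and len(accepted) < num_to_accept:
        # Envelope constant estimated from the remaining pool: the top-reward
        # candidate is accepted with probability 1, lower-reward ones less often.
        r_max = max(r for _, r in pool)
        survivors = []
        for y, r in pool:
            if random.random() < math.exp((r - r_max) / beta):
                accepted.append(y)
            else:
                survivors.append((y, r))
        pool = survivors

    return accepted[:num_to_accept]
```

In this sketch, the accepted responses would then be labeled pairwise (e.g., by a reward or preference model) to form the preference pairs consumed by SLiC- or DPO-style losses; using the maximum reward of the remaining pool as the envelope constant makes the procedure an approximation rather than exact sampling from pi*.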