
Self-Play Preference Optimization for Language Model Alignment

May 1, 2024
Authors: Yue Wu, Zhiqing Sun, Huizhuo Yuan, Kaixuan Ji, Yiming Yang, Quanquan Gu
cs.AI

Abstract

Traditional reinforcement learning from human feedback (RLHF) approaches relying on parametric models like the Bradley-Terry model fall short in capturing the intransitivity and irrationality in human preferences. Recent advancements suggest that directly working with preference probabilities can yield a more accurate reflection of human preferences, enabling more flexible and accurate language model alignment. In this paper, we propose a self-play-based method for language model alignment, which treats the problem as a constant-sum two-player game aimed at identifying the Nash equilibrium policy. Our approach, dubbed Self-Play Preference Optimization (SPPO), approximates the Nash equilibrium through iterative policy updates and enjoys a theoretical convergence guarantee. Our method can effectively increase the log-likelihood of the chosen response and decrease that of the rejected response, which cannot be trivially achieved by symmetric pairwise losses such as Direct Preference Optimization (DPO) and Identity Preference Optimization (IPO). In our experiments, using only 60k prompts (without responses) from the UltraFeedback dataset and without any prompt augmentation, by leveraging a pre-trained preference model PairRM with only 0.4B parameters, SPPO can obtain a model from fine-tuning Mistral-7B-Instruct-v0.2 that achieves the state-of-the-art length-controlled win rate of 28.53% against GPT-4-Turbo on AlpacaEval 2.0. It also outperforms (iterative) DPO and IPO on MT-Bench and the Open LLM Leaderboard. Notably, the strong performance of SPPO is achieved without additional external supervision (e.g., responses, preferences, etc.) from GPT-4 or other stronger language models.
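The abstract describes an update that pushes the log-probability ratio of the chosen response up and that of the rejected response down via a non-symmetric, per-response objective. Below is a minimal PyTorch sketch of what such a squared-regression loss might look like, assuming hard 0/1 preference estimates for the winner/loser pair and treating the previous-iteration policy as the reference; the function name `sppo_pair_loss` and the default value of `eta` are illustrative choices, not taken from the paper.

```python
import torch

def sppo_pair_loss(logp_chosen: torch.Tensor,
                   logp_rejected: torch.Tensor,
                   ref_logp_chosen: torch.Tensor,
                   ref_logp_rejected: torch.Tensor,
                   eta: float = 1.0) -> torch.Tensor:
    """Sketch of a per-pair SPPO-style objective.

    Each argument is a batch of summed token log-probabilities for a
    response under the current policy (logp_*) or under the previous
    iteration's policy used as reference (ref_logp_*). `eta` plays the
    role of a step-size-like scale; its value here is illustrative.
    """
    # Log-probability ratios of each response against the previous policy.
    ratio_chosen = logp_chosen - ref_logp_chosen
    ratio_rejected = logp_rejected - ref_logp_rejected

    # Regress each ratio toward eta * (estimated win probability - 1/2):
    # +1/2 for the preferred response, -1/2 for the rejected one, under
    # the hard 0/1 preference assumption stated above. Unlike a symmetric
    # pairwise loss, each response gets its own target, so the chosen
    # log-likelihood is pushed up and the rejected one pushed down.
    loss_chosen = (ratio_chosen - eta * 0.5) ** 2
    loss_rejected = (ratio_rejected + eta * 0.5) ** 2
    return (loss_chosen + loss_rejected).mean()
```

In an iterative setup like the one described, the reference log-probabilities would be recomputed from the latest checkpoint at each round, so the policy keeps playing against (and improving on) its own previous iteration.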
