Nash Learning from Human Feedback
December 1, 2023
Authors: Rémi Munos, Michal Valko, Daniele Calandriello, Mohammad Gheshlaghi Azar, Mark Rowland, Daniel Guo, Yunhao Tang, Matthieu Geist, Thomas Mésnard, Andrea Michi, Marco Selvi, Sertan Girgin, Nikola Momchev, Olivier Bachem, Daniel J. Mankowitz, Doina Precup, Bilal Piot
cs.AI
Abstract
Reinforcement learning from human feedback (RLHF) has emerged as the main
paradigm for aligning large language models (LLMs) with human preferences.
Typically, RLHF involves the initial step of learning a reward model from human
feedback, often expressed as preferences between pairs of text generations
produced by a pre-trained LLM. Subsequently, the LLM's policy is fine-tuned by
optimizing it to maximize the reward model through a reinforcement learning
algorithm. However, an inherent limitation of current reward models is their
inability to fully represent the richness of human preferences and their
dependency on the sampling distribution.
In this study, we introduce an alternative pipeline for the fine-tuning of
LLMs using pairwise human feedback. Our approach entails the initial learning
of a preference model, which is conditioned on two inputs given a prompt,
followed by the pursuit of a policy that consistently generates responses
preferred over those generated by any competing policy, thus defining the Nash
equilibrium of this preference model. We term this approach Nash learning from
human feedback (NLHF).
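
To make this concrete, write P(y ≻ y' | x) for the learned pairwise preference
model. The preference between two policies and the Nash objective described
above can then be written as follows; this is a minimal formalization of the
abstract's wording, and the prompt distribution ρ and the notation are ours:

\[
\mathcal{P}(\pi \succ \pi') \;=\; \mathbb{E}_{x \sim \rho,\; y \sim \pi(\cdot \mid x),\; y' \sim \pi'(\cdot \mid x)}\big[\, P(y \succ y' \mid x) \,\big],
\qquad
\pi^* \;\in\; \arg\max_{\pi} \min_{\pi'} \mathcal{P}(\pi \succ \pi').
\]

Since P(y ≻ y' | x) + P(y' ≻ y | x) = 1, this is a symmetric constant-sum game,
so in the tabular case a (possibly mixed) Nash equilibrium exists and the
max-min value is 1/2.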
In the context of a tabular policy representation, we present a novel
algorithmic solution, Nash-MD, founded on the principles of mirror descent.
This algorithm produces a sequence of policies, with the last iteration
converging to the regularized Nash equilibrium. Additionally, we explore
parametric representations of policies and introduce gradient descent
algorithms for deep-learning architectures. To demonstrate the effectiveness of
our approach, we present experimental results involving the fine-tuning of an
LLM for a text summarization task. We believe NLHF offers a compelling avenue
for preference learning and policy optimization with the potential of advancing
the field of aligning LLMs with human preferences.
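
As an illustration of the kind of tabular mirror-descent scheme the abstract
alludes to, here is a toy self-play sketch in Python. It is not the paper's
Nash-MD algorithm; the geometric mixing with a reference policy mu, the step
size eta, and the regularization strength tau are assumptions of this sketch.
The example preference matrix is intransitive (response 0 is preferred to 1,
1 to 2, and 2 to 0), which is exactly the kind of structure a single scalar
reward cannot capture, so the solution is a mixed policy rather than a single
best response.

import numpy as np

def nash_md_sketch(P, mu, eta=0.05, tau=0.5, steps=10_000):
    """Toy mirror-descent self-play on a tabular preference matrix.

    P[i, j] is the probability that response i is preferred to response j
    (so P[i, j] + P[j, i] == 1). mu is a reference policy, tau controls the
    strength of the KL regularization toward mu, and eta is the step size.
    """
    pi = np.full(P.shape[0], 1.0 / P.shape[0])  # start from the uniform policy
    for _ in range(steps):
        # Geometric mixture of the current policy with the reference policy
        mixed = pi ** (1.0 - eta * tau) * mu ** (eta * tau)
        mixed /= mixed.sum()
        # Expected preference of each response against the mixed opponent
        advantage = P @ mixed
        # Multiplicative-weights (mirror-descent) update
        pi = mixed * np.exp(eta * advantage)
        pi /= pi.sum()
    return pi

# Intransitive preferences over three candidate responses: 0 beats 1, 1 beats 2, 2 beats 0.
P = np.array([[0.5, 0.7, 0.4],
              [0.3, 0.5, 0.8],
              [0.6, 0.2, 0.5]])
mu = np.full(3, 1.0 / 3)
print(nash_md_sketch(P, mu))  # an approximate regularized Nash equilibrium (a fully mixed policy)

The geometric mixing step plays the role of the KL regularizer toward the
reference policy; without it, plain multiplicative-weights self-play is known
to cycle on intransitive games rather than settle on the equilibrium.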