Contrastive Preference Learning: Learning from Human Feedback without RL

October 20, 2023
Authors: Joey Hejna, Rafael Rafailov, Harshit Sikchi, Chelsea Finn, Scott Niekum, W. Bradley Knox, Dorsa Sadigh
cs.AI

Abstract

Reinforcement Learning from Human Feedback (RLHF) has emerged as a popular paradigm for aligning models with human intent. Typically RLHF algorithms operate in two phases: first, use human preferences to learn a reward function and second, align the model by optimizing the learned reward via reinforcement learning (RL). This paradigm assumes that human preferences are distributed according to reward, but recent work suggests that they instead follow the regret under the user's optimal policy. Thus, learning a reward function from feedback is not only based on a flawed assumption of human preference, but also leads to unwieldy optimization challenges that stem from policy gradients or bootstrapping in the RL phase. Because of these optimization challenges, contemporary RLHF methods restrict themselves to contextual bandit settings (e.g., as in large language models) or limit observation dimensionality (e.g., state-based robotics). We overcome these limitations by introducing a new family of algorithms for optimizing behavior from human feedback using the regret-based model of human preferences. Using the principle of maximum entropy, we derive Contrastive Preference Learning (CPL), an algorithm for learning optimal policies from preferences without learning reward functions, circumventing the need for RL. CPL is fully off-policy, uses only a simple contrastive objective, and can be applied to arbitrary MDPs. This enables CPL to elegantly scale to high-dimensional and sequential RLHF problems while being simpler than prior methods.
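At its core, the abstract describes a supervised contrastive objective over pairs of preference-labeled segments, where each segment is scored by a discounted sum of policy log-probabilities (standing in for negative regret under the maximum-entropy formulation). The snippet below is a minimal sketch of such a loss in PyTorch, not the authors' reference implementation; the function name `cpl_loss`, the tensor shapes, and the temperature `alpha` are illustrative assumptions.

```python
import torch
import torch.nn.functional as F

def cpl_loss(logp_chosen, logp_rejected, alpha=0.1, gamma=1.0):
    """Contrastive preference loss over a batch of segment pairs (sketch).

    logp_chosen / logp_rejected: tensors of shape (batch, horizon) holding
    per-step log pi(a_t | s_t) for the preferred and dis-preferred segments.
    alpha is a temperature; gamma discounts later steps within each segment.
    """
    horizon = logp_chosen.shape[1]
    discounts = gamma ** torch.arange(horizon, dtype=logp_chosen.dtype)
    # Discounted sum of log-probabilities scores each segment.
    score_chosen = alpha * (discounts * logp_chosen).sum(dim=1)
    score_rejected = alpha * (discounts * logp_rejected).sum(dim=1)
    # Binary contrastive (Bradley-Terry style) objective:
    # -log sigmoid(score_chosen - score_rejected), computed stably.
    return -F.logsigmoid(score_chosen - score_rejected).mean()

# Example usage with random per-step log-probabilities for 4 segment pairs.
logp_pos = torch.log(torch.rand(4, 16).clamp_min(1e-6))
logp_neg = torch.log(torch.rand(4, 16).clamp_min(1e-6))
print(cpl_loss(logp_pos, logp_neg).item())
```

Because the loss depends only on log-probabilities of logged actions, it can be minimized fully off-policy with ordinary supervised training, which is what lets this style of objective avoid policy gradients or bootstrapping.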