Contrastive Preference Learning: Learning from Human Feedback without RL
October 20, 2023
Authors: Joey Hejna, Rafael Rafailov, Harshit Sikchi, Chelsea Finn, Scott Niekum, W. Bradley Knox, Dorsa Sadigh
cs.AI
Abstract
Reinforcement Learning from Human Feedback (RLHF) has emerged as a popular paradigm for aligning models with human intent. Typically RLHF algorithms operate in two phases: first, use human preferences to learn a reward function and second, align the model by optimizing the learned reward via reinforcement learning (RL). This paradigm assumes that human preferences are distributed according to reward, but recent work suggests that they instead follow the regret under the user's optimal policy. Thus, learning a reward function from feedback is not only based on a flawed assumption of human preference, but also leads to unwieldy optimization challenges that stem from policy gradients or bootstrapping in the RL phase. Because of these optimization challenges, contemporary RLHF methods restrict themselves to contextual bandit settings (e.g., as in large language models) or limit observation dimensionality (e.g., state-based robotics). We overcome these limitations by introducing a new family of algorithms for optimizing behavior from human feedback using the regret-based model of human preferences. Using the principle of maximum entropy, we derive Contrastive Preference Learning (CPL), an algorithm for learning optimal policies from preferences without learning reward functions, circumventing the need for RL. CPL is fully off-policy, uses only a simple contrastive objective, and can be applied to arbitrary MDPs. This enables CPL to elegantly scale to high-dimensional and sequential RLHF problems while being simpler than prior methods.
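
The abstract describes CPL only at a high level: score each preference segment by the policy's (discounted) action log-probabilities, which stand in for the regret/advantage under a maximum-entropy policy, and train with a contrastive comparison between the preferred and dispreferred segment. The sketch below is one way such an off-policy objective could look in PyTorch; it is an illustrative assumption based on the abstract, not the authors' reference implementation, and the names (cpl_loss, alpha, gamma, lam) and exact weighting are placeholders.

```python
# Illustrative sketch (not the authors' code): a contrastive preference loss
# in the spirit of CPL, assuming per-step log-probabilities of each segment's
# actions under the current policy have already been computed.
import torch
import torch.nn.functional as F

def cpl_loss(logp_pref, logp_dispref, alpha=0.1, gamma=1.0, lam=1.0):
    """Contrastive preference loss over a batch of segment pairs.

    logp_pref, logp_dispref: tensors of shape (batch, horizon) holding
        log pi(a_t | s_t) for the preferred / dispreferred segments.
    alpha: temperature linking log-probabilities to advantages (assumed).
    gamma: discount applied across the segment (assumed).
    lam: optional conservative bias on the dispreferred score; lam < 1
        tilts the comparison toward the preferred segment (assumed variant).
    """
    horizon = logp_pref.shape[1]
    discounts = gamma ** torch.arange(
        horizon, dtype=logp_pref.dtype, device=logp_pref.device
    )

    # Discounted sum of log-probs is the segment "score", standing in for the
    # segment's advantage/negative regret under the max-ent optimal policy.
    score_pref = alpha * (discounts * logp_pref).sum(dim=1)
    score_dispref = alpha * (discounts * logp_dispref).sum(dim=1)

    # Contrastive comparison: -log sigmoid(score_pref - lam * score_dispref),
    # i.e. a binary cross-entropy that the preferred segment wins.
    logits = score_pref - lam * score_dispref
    return F.binary_cross_entropy_with_logits(logits, torch.ones_like(logits))
```

Under these assumptions, training reduces to supervised gradient steps on preference-labeled segment pairs drawn from a fixed dataset, which is consistent with the abstract's claim that CPL is fully off-policy and needs neither a learned reward function nor an RL loop.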