Scaling Laws for Reward Model Overoptimization in Direct Alignment Algorithms
June 5, 2024
Authors: Rafael Rafailov, Yaswanth Chittepu, Ryan Park, Harshit Sikchi, Joey Hejna, Bradley Knox, Chelsea Finn, Scott Niekum
cs.AI
Abstract
Reinforcement Learning from Human Feedback (RLHF) has been crucial to the
recent success of Large Language Models (LLMs); however, it is often a complex
and brittle process. In the classical RLHF framework, a reward model is first
trained to represent human preferences, which is in turn used by an online
reinforcement learning (RL) algorithm to optimize the LLM. A prominent issue
with such methods is reward over-optimization or reward hacking,
where performance as measured by the learned proxy reward model increases, but
true quality plateaus or even deteriorates. Direct Alignment Algorithms (DAAs)
like Direct Preference Optimization have emerged as alternatives to the
classical RLHF pipeline by circumventing the reward modeling phase. However,
although DAAs do not use a separate proxy reward model, they still commonly
deteriorate from over-optimization. While the so-called reward hacking
phenomenon is not well-defined for DAAs, we still uncover similar trends: at
higher KL budgets, DAA algorithms exhibit similar degradation patterns to their
classic RLHF counterparts. In particular, we find that DAA methods deteriorate
not only across a wide range of KL budgets but also often before even a single
epoch of the dataset is completed. Through extensive empirical experimentation,
this work formulates and formalizes the reward over-optimization or hacking
problem for DAAs and explores its consequences across objectives, training
regimes, and model scales.
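
For reference, the two objectives the abstract alludes to can be written in the standard form used in the RLHF and DPO literature (a sketch of the usual formulation, not equations reproduced from this paper). Here \pi_\theta is the policy being trained, \pi_{\mathrm{ref}} the reference (SFT) model, r_\phi a learned proxy reward model, \beta the KL penalty coefficient that sets the KL budget, and (x, y_w, y_l) a prompt with preferred and dispreferred completions.

Classical KL-regularized RLHF objective:
\max_{\pi_\theta}\;\mathbb{E}_{x\sim\mathcal{D},\,y\sim\pi_\theta(\cdot\mid x)}\big[r_\phi(x,y)\big]\;-\;\beta\,\mathbb{D}_{\mathrm{KL}}\!\big[\pi_\theta(\cdot\mid x)\,\big\|\,\pi_{\mathrm{ref}}(\cdot\mid x)\big]

Direct Preference Optimization loss (no explicit proxy reward model):
\mathcal{L}_{\mathrm{DPO}}(\pi_\theta;\pi_{\mathrm{ref}})=-\,\mathbb{E}_{(x,y_w,y_l)\sim\mathcal{D}}\left[\log\sigma\!\left(\beta\log\frac{\pi_\theta(y_w\mid x)}{\pi_{\mathrm{ref}}(y_w\mid x)}-\beta\log\frac{\pi_\theta(y_l\mid x)}{\pi_{\mathrm{ref}}(y_l\mid x)}\right)\right]

A smaller \beta corresponds to a larger KL budget, i.e. the policy is allowed to drift further from \pi_{\mathrm{ref}}; the abstract reports that degradation appears in this higher-KL-budget regime.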