

A Long Way to Go: Investigating Length Correlations in RLHF

October 5, 2023
Authors: Prasann Singhal, Tanya Goyal, Jiacheng Xu, Greg Durrett
cs.AI

Abstract

Great successes have been reported using Reinforcement Learning from Human Feedback (RLHF) to align large language models. Open-source preference datasets and reward models have enabled wider experimentation beyond generic chat settings, particularly to make systems more "helpful" for tasks like web question answering, summarization, and multi-turn dialogue. When optimizing for helpfulness, RLHF has been consistently observed to drive models to produce longer outputs. This paper demonstrates that optimizing for response length is a significant factor behind RLHF's reported improvements in these settings. First, we study the relationship between reward and length for reward models trained on three open-source preference datasets for helpfulness. Here, length correlates strongly with reward, and improvements in reward score are driven in large part by shifting the distribution over output lengths. We then explore interventions during both RL and reward model learning to see if we can achieve the same downstream improvements as RLHF without increasing length. While our interventions mitigate length increases, they aren't uniformly effective across settings. Furthermore, we find that even running RLHF with a reward based solely on length can reproduce most of the downstream improvements over the initial policy model, showing that reward models in these settings have a long way to go.
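The two measurements the abstract describes, the reward-length correlation and a length-only reward, can be sketched in a few lines. The following is a minimal illustration, not the authors' code: the responses and reward scores are made-up stand-ins, and the paper's actual reward models and length scaling may differ.

```python
# Sketch of the abstract's two ingredients (hypothetical data, not the paper's).
from scipy.stats import pearsonr


def length_only_reward(response: str) -> float:
    """A reward that scores a response purely by its word count.

    The paper reports that optimizing a length-based reward like this with
    RLHF recovers much of the downstream improvement; the exact form used
    there may differ.
    """
    return float(len(response.split()))


# Toy version of the correlation analysis: given reward-model scores for
# some outputs, check how strongly score co-varies with output length.
responses = [
    "Paris.",
    "The capital of France is Paris.",
    "The capital of France is Paris, a city of about two million people "
    "known for the Eiffel Tower and the Louvre.",
]
reward_scores = [0.1, 0.4, 0.9]  # hypothetical reward-model outputs
lengths = [len(resp.split()) for resp in responses]

corr, pval = pearsonr(lengths, reward_scores)
print(f"Pearson r between length and reward: {corr:.2f} (p={pval:.2f})")
```

A strongly positive correlation on real model outputs would be the pattern the paper reports: reward gains arriving largely through longer responses rather than better content.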