

A Long Way to Go: Investigating Length Correlations in RLHF

October 5, 2023
作者: Prasann Singhal, Tanya Goyal, Jiacheng Xu, Greg Durrett
cs.AI

Abstract

Great successes have been reported using Reinforcement Learning from Human Feedback (RLHF) to align large language models. Open-source preference datasets and reward models have enabled wider experimentation beyond generic chat settings, particularly to make systems more "helpful" for tasks like web question answering, summarization, and multi-turn dialogue. When optimizing for helpfulness, RLHF has been consistently observed to drive models to produce longer outputs. This paper demonstrates that optimizing for response length is a significant factor behind RLHF's reported improvements in these settings. First, we study the relationship between reward and length for reward models trained on three open-source preference datasets for helpfulness. Here, length correlates strongly with reward, and improvements in reward score are driven in large part by shifting the distribution over output lengths. We then explore interventions during both RL and reward model learning to see if we can achieve the same downstream improvements as RLHF without increasing length. While our interventions mitigate length increases, they aren't uniformly effective across settings. Furthermore, we find that even running RLHF with a reward based solely on length can reproduce most of the downstream improvements over the initial policy model, showing that reward models in these settings have a long way to go.
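The abstract's first analysis step — measuring how strongly output length correlates with reward-model score — can be illustrated with a minimal sketch. The data below is entirely hypothetical (the paper's actual datasets and reward models are not reproduced here); the sketch only shows the kind of length-versus-reward correlation computation the study describes.

```python
from statistics import mean

def pearson_corr(xs, ys):
    """Pearson correlation coefficient between two equal-length sequences."""
    mx, my = mean(xs), mean(ys)
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    sx = sum((x - mx) ** 2 for x in xs) ** 0.5
    sy = sum((y - my) ** 2 for y in ys) ** 0.5
    return cov / (sx * sy)

# Hypothetical example: output lengths (in tokens) paired with
# reward-model scores for the same responses.
lengths = [32, 64, 128, 256, 512]
rewards = [0.1, 0.4, 0.6, 0.8, 0.9]
r = pearson_corr(lengths, rewards)
```

A correlation near 1 on real preference data would mirror the paper's finding that reward gains are largely explained by shifts in the length distribution.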