ODIN: Disentangled Reward Mitigates Hacking in RLHF
February 11, 2024
作者: Lichang Chen, Chen Zhu, Davit Soselia, Jiuhai Chen, Tianyi Zhou, Tom Goldstein, Heng Huang, Mohammad Shoeybi, Bryan Catanzaro
cs.AI
Abstract
In this work, we study the issue of reward hacking on the response length, a
challenge emerging in Reinforcement Learning from Human Feedback (RLHF) on
LLMs. A well-formatted, verbose but less helpful response from an LLM can
often deceive LLM or even human evaluators into assigning high scores. The same
issue holds for some reward models in RL. To address the challenges in
both training and evaluation, we establish a more reliable evaluation protocol
for comparing different training configurations, which inspects the trade-off
between LLM evaluation score and response length obtained by varying training
hyperparameters. Based on this evaluation, we conduct large-scale studies,
where the results shed light on the efficacy of hyperparameters and tricks
used in RL on mitigating length bias. We further propose to improve the reward
model by jointly training two linear heads on shared feature representations to
predict the rewards, one trained to correlate with length, and the other
trained to decorrelate with length and therefore focus more on the actual
content. We then discard the length head in RL to prevent reward hacking on
length. Experiments demonstrate that our approach almost eliminates the reward
correlation with length, and improves the obtained policy by a significant
margin.
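
The evaluation protocol described above compares training configurations by the trade-off between the judge's score and the response length each configuration produces. A minimal Python sketch of one way to tabulate that trade-off is given below; the RunResult fields, the Pareto-dominance criterion, and all names are illustrative assumptions rather than details taken from the paper.

from dataclasses import dataclass
from typing import List


@dataclass
class RunResult:
    name: str          # hyperparameter configuration, e.g. "kl_coef=0.05" (hypothetical)
    avg_length: float  # mean response length in tokens
    eval_score: float  # score assigned by an LLM (or human) judge


def pareto_frontier(runs: List[RunResult]) -> List[RunResult]:
    """Keep only runs that are not dominated: no other run is at least as
    short on average while scoring strictly higher."""
    frontier = [
        r for r in runs
        if not any(o.avg_length <= r.avg_length and o.eval_score > r.eval_score
                   for o in runs)
    ]
    return sorted(frontier, key=lambda r: r.avg_length)

Comparing the frontiers of two methods then shows whether one achieves higher scores without simply producing longer responses.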
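
The disentangled reward model itself can be pictured as one shared feature vector feeding two linear heads. The PyTorch sketch below illustrates that structure under stated assumptions: the backbone is a Hugging Face-style causal LM whose last non-padding hidden state serves as the shared feature, the preference loss is a Bradley-Terry ranking loss on the summed heads, and the (de)correlation terms are batch Pearson correlations with response length. The loss composition and the weight lam are illustrative, not the paper's exact objective.

import torch
import torch.nn as nn
import torch.nn.functional as F


def pearson_corr(x, y, eps=1e-8):
    """Batch Pearson correlation between two 1-D tensors."""
    xc, yc = x - x.mean(), y - y.mean()
    return (xc * yc).sum() / (xc.norm() * yc.norm() + eps)


class TwoHeadRewardModel(nn.Module):
    """Shared LM features feed a length head and a quality head."""

    def __init__(self, backbone, hidden_size):
        super().__init__()
        self.backbone = backbone                       # assumed HF-style causal LM
        self.length_head = nn.Linear(hidden_size, 1)   # trained to track length
        self.quality_head = nn.Linear(hidden_size, 1)  # trained to ignore length

    def forward(self, input_ids, attention_mask):
        out = self.backbone(input_ids=input_ids, attention_mask=attention_mask,
                            output_hidden_states=True)
        last = attention_mask.sum(dim=1) - 1           # index of last real token
        batch_idx = torch.arange(input_ids.size(0), device=input_ids.device)
        feats = out.hidden_states[-1][batch_idx, last]
        return (self.length_head(feats).squeeze(-1),
                self.quality_head(feats).squeeze(-1))


def reward_loss(model, chosen, rejected, chosen_len, rejected_len, lam=1.0):
    """Ranking loss on the summed heads plus (de)correlation penalties (illustrative)."""
    rl_c, rq_c = model(**chosen)     # chosen/rejected: dicts of input_ids, attention_mask
    rl_r, rq_r = model(**rejected)
    rank = -F.logsigmoid((rl_c + rq_c) - (rl_r + rq_r)).mean()
    lengths = torch.cat([chosen_len, rejected_len]).float()
    corr_len = pearson_corr(torch.cat([rl_c, rl_r]), lengths)   # push toward +1
    corr_qual = pearson_corr(torch.cat([rq_c, rq_r]), lengths)  # push toward 0
    return rank + lam * (corr_qual.abs() - corr_len)

At RL time only quality_head would be kept as the reward signal; length_head exists solely to absorb the length-correlated component during reward-model training.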