Aligning Large Language Models through Synthetic Feedback
May 23, 2023
作者: Sungdong Kim, Sanghwan Bae, Jamin Shin, Soyoung Kang, Donghyun Kwak, Kang Min Yoo, Minjoon Seo
cs.AI
Abstract
Aligning large language models (LLMs) to human values has become increasingly important, as it enables sophisticated steering of LLMs, e.g., making them follow given instructions while keeping them less toxic. However, it requires a significant amount of human demonstrations and feedback. Recently, open-sourced models have attempted to replicate the alignment learning process by distilling data from already aligned LLMs like InstructGPT or ChatGPT. While this process reduces human effort, constructing these datasets depends heavily on the teacher models. In this work, we propose a novel framework for alignment learning with almost no human labor and no dependency on pre-aligned LLMs. First, we perform reward modeling (RM) with synthetic feedback by contrasting responses from vanilla LLMs of various sizes and prompts. Then, we use the RM to simulate high-quality demonstrations for training a supervised policy, and to further optimize the model with reinforcement learning. Our resulting model, the Aligned Language Model with Synthetic Training dataset (ALMoST), outperforms open-sourced models, including Alpaca, Dolly, and OpenAssistant, which are trained on the outputs of InstructGPT or on human-annotated instructions. In A/B tests using GPT-4 as the judge, our 7B model outperforms 12-13B models with an average winning rate of about 75%.
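To make the first step concrete, here is a minimal Python sketch (not the authors' code) of how synthetic preference pairs might be built from the abstract's ranking assumption: a response from a larger vanilla LLM prompted with more in-context demonstrations is assumed to be preferred over one from a smaller LLM with fewer demonstrations. The `generate` helper, the `CONFIGS` list, and the specific model sizes are illustrative assumptions, not details from the paper.

```python
from itertools import combinations

def generate(prompt: str, model_size: str, n_demos: int) -> str:
    """Hypothetical stand-in for sampling a response from a vanilla
    (unaligned) LLM of the given size with n_demos in-context examples."""
    return f"[response from a {model_size} LLM given {n_demos} demonstrations]"

# Configurations ordered from assumed-best to assumed-worst:
# larger model + more demonstrations is assumed to respond better.
CONFIGS = [("30B", 3), ("13B", 2), ("7B", 1)]

def synthetic_comparisons(prompt: str) -> list[tuple[str, str]]:
    """Build (chosen, rejected) pairs from the ranking assumption alone,
    with no human feedback and no pre-aligned teacher LLM."""
    responses = [generate(prompt, size, k) for size, k in CONFIGS]
    # Every higher-ranked response is treated as preferred over each
    # lower-ranked one, yielding pairwise comparisons for reward modeling.
    return [(responses[i], responses[j])
            for i, j in combinations(range(len(responses)), 2)]

for chosen, rejected in synthetic_comparisons("Explain photosynthesis to a child."):
    print("chosen:", chosen, "| rejected:", rejected)
```

A reward model trained on such pairs with a standard pairwise ranking loss could then be used as the abstract describes: to filter simulated demonstrations for supervised policy training, and as the reward signal for the subsequent reinforcement learning stage.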