Aligning Large Language Models through Synthetic Feedback
May 23, 2023
Authors: Sungdong Kim, Sanghwan Bae, Jamin Shin, Soyoung Kang, Donghyun Kwak, Kang Min Yoo, Minjoon Seo
cs.AI
Abstract
Aligning large language models (LLMs) to human values has become increasingly
important as it enables sophisticated steering of LLMs, e.g., making them
follow given instructions while keeping them less toxic. However, it requires a
significant amount of human demonstrations and feedback. Recently, open-sourced
models have attempted to replicate the alignment learning process by distilling
data from already aligned LLMs like InstructGPT or ChatGPT. While this process
reduces human efforts, constructing these datasets has a heavy dependency on
the teacher models. In this work, we propose a novel framework for alignment
learning with almost no human labor and no dependency on pre-aligned LLMs.
First, we perform reward modeling (RM) with synthetic feedback by contrasting
responses from vanilla LLMs with various sizes and prompts. Then, we use the RM
for simulating high-quality demonstrations to train a supervised policy and for
further optimizing the model with reinforcement learning. Our resulting model,
Aligned Language Model with Synthetic Training dataset (ALMoST), outperforms
open-sourced models, including Alpaca, Dolly, and OpenAssistant, which are
trained on the outputs of InstructGPT or human-annotated instructions. Our
7B-sized model outperforms the 12-13B models in A/B tests using GPT-4 as
the judge, with an average winning rate of about 75%.
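
To make the first step more concrete, below is a minimal, hypothetical sketch (not the authors' code) of how synthetic preference pairs for reward modeling might be assembled. It assumes, as the abstract suggests, that responses sampled from larger vanilla LLMs with stronger prompts are preferred over those from smaller or weakly prompted ones; the `generate_response` stub and all configuration values are placeholders.

```python
# Sketch: building synthetic preference pairs by contrasting responses from
# vanilla LLMs of different sizes and prompt strengths. All names and values
# here are hypothetical illustrations, not the paper's implementation.

from itertools import combinations


def generate_response(prompt: str, model_size: str, num_demos: int) -> str:
    """Hypothetical stand-in for sampling a response from a vanilla LLM."""
    return f"[{model_size} model, {num_demos}-shot response to: {prompt}]"


def synthetic_preference_pairs(prompt: str, configs: list[tuple[int, int]]):
    """configs: list of (model_size_in_billions, num_demonstrations),
    pre-sorted so earlier entries are assumed to yield better responses."""
    responses = [
        generate_response(prompt, f"{size}B", demos) for size, demos in configs
    ]
    # Each response from a stronger configuration (earlier index) is labeled
    # "chosen" against every weaker one ("rejected"); this ranking heuristic
    # is the synthetic feedback used to train the reward model.
    return [(responses[i], responses[j]) for i, j in combinations(range(len(responses)), 2)]


pairs = synthetic_preference_pairs(
    "Explain why the sky is blue.",
    configs=[(30, 5), (13, 3), (7, 1)],  # larger model + more demos assumed better
)
for chosen, rejected in pairs:
    print("CHOSEN:", chosen, "| REJECTED:", rejected)
```

Such pairs could then be fed to a standard pairwise ranking loss to train the reward model, which in turn filters or scores generated demonstrations for the supervised policy and serves as the reward signal during reinforcement learning, per the pipeline described above.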