Direct Language Model Alignment from Online AI Feedback

February 7, 2024
Authors: Shangmin Guo, Biao Zhang, Tianlin Liu, Tianqi Liu, Misha Khalman, Felipe Llinares, Alexandre Rame, Thomas Mesnard, Yao Zhao, Bilal Piot, Johan Ferret, Mathieu Blondel
cs.AI

Abstract

Direct alignment from preferences (DAP) methods, such as DPO, have recently emerged as efficient alternatives to reinforcement learning from human feedback (RLHF) that do not require a separate reward model. However, the preference datasets used in DAP methods are usually collected ahead of training and never updated, so the feedback is purely offline. Moreover, responses in these datasets are often sampled from a language model distinct from the one being aligned, and since the model evolves over training, the alignment phase is inevitably off-policy. In this study, we posit that online feedback is key and improves DAP methods. Our method, online AI feedback (OAIF), uses an LLM as annotator: on each training iteration, we sample two responses from the current model and prompt the LLM annotator to choose which one is preferred, thus providing online feedback. Despite its simplicity, we demonstrate via human evaluation on several tasks that OAIF outperforms both offline DAP and RLHF methods. We further show that the feedback leveraged in OAIF is easily controllable via instruction prompts to the LLM annotator.
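
The following is a minimal sketch, not the authors' implementation, of the training loop the abstract describes: sample two responses from the current policy, ask an LLM annotator which one it prefers, and apply a DAP update (here the standard DPO loss) on that online preference pair. The helper callables `sample_two`, `annotate`, `logprob_policy`, and `logprob_ref` are hypothetical placeholders for the policy model, the LLM annotator, and a frozen reference model.

```python
# Illustrative sketch of one OAIF iteration with a DPO-style loss.
# Assumes hypothetical callables for the policy, annotator, and reference model.
import math
import random
from typing import Callable, Tuple


def dpo_loss(
    logp_policy_chosen: float,
    logp_policy_rejected: float,
    logp_ref_chosen: float,
    logp_ref_rejected: float,
    beta: float = 0.1,
) -> float:
    """Standard DPO objective: -log sigmoid(beta * (policy margin - reference margin))."""
    margin = beta * (
        (logp_policy_chosen - logp_ref_chosen)
        - (logp_policy_rejected - logp_ref_rejected)
    )
    return -math.log(1.0 / (1.0 + math.exp(-margin)))


def oaif_step(
    prompt: str,
    sample_two: Callable[[str], Tuple[str, str]],   # current policy -> two candidate responses
    annotate: Callable[[str, str, str], int],       # LLM annotator -> index (0 or 1) of preferred response
    logprob_policy: Callable[[str, str], float],    # log p_theta(response | prompt)
    logprob_ref: Callable[[str, str], float],       # log p_ref(response | prompt)
) -> float:
    """One OAIF iteration: on-policy sampling + online AI feedback + DPO loss."""
    y1, y2 = sample_two(prompt)            # responses come from the model being aligned (on-policy)
    preferred = annotate(prompt, y1, y2)   # online feedback from the LLM annotator
    chosen, rejected = (y1, y2) if preferred == 0 else (y2, y1)
    return dpo_loss(
        logprob_policy(prompt, chosen),
        logprob_policy(prompt, rejected),
        logprob_ref(prompt, chosen),
        logprob_ref(prompt, rejected),
    )


if __name__ == "__main__":
    # Toy stand-ins, just to show the control flow; real use would query the
    # policy model, a frozen reference model, and an LLM annotator.
    random.seed(0)
    loss = oaif_step(
        prompt="Summarise the article.",
        sample_two=lambda p: ("response A", "response B"),
        annotate=lambda p, a, b: random.randint(0, 1),
        logprob_policy=lambda p, y: -random.random(),
        logprob_ref=lambda p, y: -random.random(),
    )
    print(f"DPO loss on the online preference pair: {loss:.4f}")
```

The controllability mentioned in the abstract would enter through the `annotate` callable: the instruction prompt given to the LLM annotator (e.g., asking it to prefer shorter or more helpful responses) steers which response is labeled as chosen.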