

Suppressing Pink Elephants with Direct Principle Feedback

February 12, 2024
Authors: Louis Castricato, Nathan Lile, Suraj Anand, Hailey Schoelkopf, Siddharth Verma, Stella Biderman
cs.AI

Abstract

Existing methods for controlling language models, such as RLHF and Constitutional AI, involve determining which LLM behaviors are desirable and training them into a language model. However, in many cases it is desirable for LLMs to be controllable at inference time, so that they can be used in multiple contexts with diverse needs. We illustrate this with the Pink Elephant Problem: instructing an LLM to avoid discussing a certain entity (a "Pink Elephant") and instead discuss a preferred entity (a "Grey Elephant"). We apply a novel simplification of Constitutional AI, Direct Principle Feedback (DPF), which skips the ranking of responses and applies DPO directly to critiques and revisions. Our results show that after DPF fine-tuning on our synthetic Pink Elephants dataset, our fine-tuned 13B LLaMA 2 model significantly outperforms Llama-2-13B-Chat and a prompted baseline, and performs as well as GPT-4 on our curated test set assessing the Pink Elephant Problem.
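In practice, DPF as described here turns each critique/revision step into a single preference pair: the revised response is the preferred ("chosen") completion and the original response is the dispreferred ("rejected") completion, and a standard DPO objective is optimized over these pairs. The sketch below illustrates that setup; the field names, example strings, and beta value are illustrative assumptions, not the authors' exact implementation.

```python
import torch.nn.functional as F

def dpo_loss(policy_chosen_logps, policy_rejected_logps,
             ref_chosen_logps, ref_rejected_logps, beta=0.1):
    """Standard DPO loss over (chosen, rejected) log-probabilities.

    For DPF, 'chosen' is the revised response and 'rejected' is the
    original (pre-revision) response, both conditioned on the same
    dialogue-plus-rule prompt.
    """
    chosen_rewards = beta * (policy_chosen_logps - ref_chosen_logps)
    rejected_rewards = beta * (policy_rejected_logps - ref_rejected_logps)
    # Maximize the margin between revised and original responses.
    return -F.logsigmoid(chosen_rewards - rejected_rewards).mean()

# Hypothetical example of a single DPF preference pair:
pair = {
    "prompt": "Rule: do not mention the Pink Elephant.\nUser: ...",
    "rejected": "original response that mentions the Pink Elephant",
    "chosen": "revised response that discusses the Grey Elephant instead",
}
```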