ChatPaper.aiChatPaper

潜伏特工:训练欺骗性持久的低风险模型

Sleeper Agents: Training Deceptive LLMs that Persist Through Safety Training

January 10, 2024
作者: Evan Hubinger, Carson Denison, Jesse Mu, Mike Lambert, Meg Tong, Monte MacDiarmid, Tamera Lanham, Daniel M. Ziegler, Tim Maxwell, Newton Cheng, Adam Jermyn, Amanda Askell, Ansh Radhakrishnan, Cem Anil, David Duvenaud, Deep Ganguli, Fazl Barez, Jack Clark, Kamal Ndousse, Kshitij Sachan, Michael Sellitto, Mrinank Sharma, Nova DasSarma, Roger Grosse, Shauna Kravec, Yuntao Bai, Zachary Witten, Marina Favaro, Jan Brauner, Holden Karnofsky, Paul Christiano, Samuel R. Bowman, Logan Graham, Jared Kaplan, Sören Mindermann, Ryan Greenblatt, Buck Shlegeris, Nicholas Schiefer, Ethan Perez
cs.AI

摘要

人类有能力进行策略性欺骗行为:在大多数情况下表现出有帮助的行为,但一旦有机会追求替代目标时,则表现出截然不同的行为。如果一个人工智能系统学会了这种欺骗策略,我们能否利用当前最先进的安全训练技术检测并消除它呢?为了研究这个问题,我们构建了大型语言模型(LLMs)中欺骗行为的概念验证示例。例如,我们训练模型,在提示中指定年份为2023时编写安全代码,但在指定年份为2024时插入可利用的代码。我们发现这种带后门的行为可以变得持久化,以至于无法通过标准的安全训练技术(包括监督微调、强化学习和对抗训练)来消除,后门行为在最大的模型和训练出具有欺骗训练过程思维链的模型中最为持久,即使去除了思维链,这种持久性仍然存在。此外,我们发现对抗训练并非消除后门,而是教会模型更好地识别其后门触发器,有效地隐藏了不安全的行为。我们的研究结果表明,一旦模型表现出欺骗行为,标准技术可能无法消除这种欺骗,并可能产生安全性的虚假印象。
English
Humans are capable of strategically deceptive behavior: behaving helpfully in most situations, but then behaving very differently in order to pursue alternative objectives when given the opportunity. If an AI system learned such a deceptive strategy, could we detect it and remove it using current state-of-the-art safety training techniques? To study this question, we construct proof-of-concept examples of deceptive behavior in large language models (LLMs). For example, we train models that write secure code when the prompt states that the year is 2023, but insert exploitable code when the stated year is 2024. We find that such backdoored behavior can be made persistent, so that it is not removed by standard safety training techniques, including supervised fine-tuning, reinforcement learning, and adversarial training (eliciting unsafe behavior and then training to remove it). The backdoored behavior is most persistent in the largest models and in models trained to produce chain-of-thought reasoning about deceiving the training process, with the persistence remaining even when the chain-of-thought is distilled away. Furthermore, rather than removing backdoors, we find that adversarial training can teach models to better recognize their backdoor triggers, effectively hiding the unsafe behavior. Our results suggest that, once a model exhibits deceptive behavior, standard techniques could fail to remove such deception and create a false impression of safety.
PDF300December 15, 2024