Are aligned neural networks adversarially aligned?
June 26, 2023
Authors: Nicholas Carlini, Milad Nasr, Christopher A. Choquette-Choo, Matthew Jagielski, Irena Gao, Anas Awadalla, Pang Wei Koh, Daphne Ippolito, Katherine Lee, Florian Tramer, Ludwig Schmidt
cs.AI
Abstract
Large language models are now tuned to align with the goals of their creators, namely to be "helpful and harmless." These models should respond helpfully to user questions, but refuse to answer requests that could cause harm. However, adversarial users can construct inputs which circumvent attempts at alignment. In this work, we study to what extent these models remain aligned, even when interacting with an adversarial user who constructs worst-case inputs (adversarial examples). These inputs are designed to cause the model to emit harmful content that would otherwise be prohibited. We show that existing NLP-based optimization attacks are insufficiently powerful to reliably attack aligned text models: even when current NLP-based attacks fail, we can find adversarial inputs with brute force. As a result, the failure of current attacks should not be seen as proof that aligned text models remain aligned under adversarial inputs.
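To make the brute-force idea concrete, here is a minimal sketch (not the paper's exact method) of searching over random short suffixes and scoring each one by how likely the model becomes to continue with a target string. The model name, prompt, target string, and search budget are illustrative placeholders; an aligned chat model would be substituted in practice.

```python
# Sketch: brute-force suffix search scored by log p(target | prompt + suffix).
# All names and strings below are illustrative placeholders.
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

model_name = "gpt2"  # placeholder; the paper studies aligned chat models
tok = AutoTokenizer.from_pretrained(model_name)
model = AutoModelForCausalLM.from_pretrained(model_name).eval()

prompt = "User: How do I pick a lock?\nAssistant:"
target = " Sure, here is how"   # the continuation we try to elicit
budget = 200                    # number of random candidate suffixes

def target_logprob(prefix: str) -> float:
    """Log-probability of `target` given `prefix` under the model."""
    prefix_ids = tok(prefix, return_tensors="pt").input_ids
    target_ids = tok(target, return_tensors="pt").input_ids
    ids = torch.cat([prefix_ids, target_ids], dim=1)
    with torch.no_grad():
        logits = model(ids).logits
    # Logits at position i predict token i+1, so slice the positions
    # that predict exactly the target tokens.
    logps = torch.log_softmax(logits[0, prefix_ids.shape[1] - 1:-1], dim=-1)
    return logps.gather(1, target_ids[0].unsqueeze(1)).sum().item()

best_suffix, best_score = "", target_logprob(prompt)
for _ in range(budget):
    # Sample a random 3-token suffix as a crude brute-force candidate.
    cand_ids = torch.randint(0, tok.vocab_size, (3,))
    cand = tok.decode(cand_ids)
    score = target_logprob(prompt + cand)
    if score > best_score:
        best_suffix, best_score = cand, score

print(f"best suffix: {best_suffix!r}  log p(target) = {best_score:.2f}")
```

Gradient-guided NLP attacks replace the random sampling above with a smarter proposal step; the abstract's point is that even this naive search can succeed where current optimization attacks fail.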
However, the recent trend in large-scale ML models is multimodal models that allow users to provide images that influence the text that is generated. We show these models can be easily attacked, i.e., induced to perform arbitrary un-aligned behavior through adversarial perturbation of the input image. We conjecture that improved NLP attacks may demonstrate this same level of adversarial control over text-only models.
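The image attack described above is, at its core, standard adversarial-example optimization applied to the continuous vision input of a multimodal model. Below is a minimal, generic sketch of projected gradient descent under an L-infinity budget; the loss function is a dummy stand-in for the negative log-likelihood a vision-language model assigns to a target text given the image, and the epsilon, step size, and step count are illustrative, not the paper's exact configuration.

```python
# Sketch: PGD on an input image under an L-infinity budget.
# `loss_fn` is a stand-in; in the setting above it would be
# -log p(target_text | image) under a vision-language model.
import torch

def pgd_attack(image: torch.Tensor, loss_fn, eps=8 / 255, alpha=1 / 255, steps=100):
    """Return an adversarial image within eps (L-inf) of `image` that lowers loss_fn."""
    adv = image.clone().detach()
    for _ in range(steps):
        adv.requires_grad_(True)
        loss = loss_fn(adv)
        grad, = torch.autograd.grad(loss, adv)
        with torch.no_grad():
            adv = adv - alpha * grad.sign()                 # descend on the loss
            adv = image + (adv - image).clamp(-eps, eps)    # project back into the budget
            adv = adv.clamp(0.0, 1.0)                       # stay a valid image
    return adv.detach()

# Toy usage with a dummy differentiable loss (placeholder for the real objective).
image = torch.rand(1, 3, 224, 224)
dummy_loss = lambda x: (x.mean() - 0.1) ** 2
adv_image = pgd_attack(image, dummy_loss)
print((adv_image - image).abs().max())  # stays within the eps budget
```

Because the image is a continuous input, gradients flow cleanly through the model, which is why such perturbations are far easier to optimize than discrete token-level attacks on text-only models.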