Interpretability at Scale: Identifying Causal Mechanisms in Alpaca
May 15, 2023
Authors: Zhengxuan Wu, Atticus Geiger, Christopher Potts, Noah D. Goodman
cs.AI
Abstract
Obtaining human-interpretable explanations of large, general-purpose language models is an urgent goal for AI safety. However, it is just as important that our interpretability methods are faithful to the causal dynamics underlying model behavior and able to robustly generalize to unseen inputs. Distributed Alignment Search (DAS) is a powerful gradient descent method grounded in a theory of causal abstraction that uncovered perfect alignments between interpretable symbolic algorithms and small deep learning models fine-tuned for specific tasks. In the present paper, we scale DAS significantly by replacing the remaining brute-force search steps with learned parameters -- an approach we call Boundless DAS. This enables us to efficiently search for interpretable causal structure in large language models while they follow instructions. We apply Boundless DAS to the Alpaca model (7B parameters), which, off the shelf, solves a simple numerical reasoning problem. With Boundless DAS, we discover that Alpaca does this by implementing a causal model with two interpretable boolean variables. Furthermore, we find that the alignment of neural representations with these variables is robust to changes in inputs and instructions. These findings mark a first step toward deeply understanding the inner workings of our largest and most widely deployed language models.
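To make the abstract's claims more concrete, here is a minimal, hypothetical sketch of the kind of high-level causal model described above: two boolean variables that jointly determine whether a queried value falls inside a numeric bracket. The task framing, function name, and variable names are illustrative assumptions, not the authors' code.

```python
# Hypothetical sketch of a two-boolean-variable causal model for a simple
# bracketing-style numerical reasoning task (names and framing are assumptions).

def high_level_causal_model(value: float, low: float, high: float) -> str:
    # Boolean causal variable 1: does the value clear the lower bound?
    above_low = value >= low
    # Boolean causal variable 2: does the value stay under the upper bound?
    below_high = value <= high
    # The answer is a simple function of the two boolean variables.
    return "Yes" if (above_low and below_high) else "No"

assert high_level_causal_model(3.50, 2.00, 5.00) == "Yes"
assert high_level_causal_model(6.75, 2.00, 5.00) == "No"
```

Similarly, "replacing the remaining brute-force search steps with learned parameters" can be pictured as learning both a rotation of a hidden representation and a soft boundary mask that selects which rotated dimensions to swap during an interchange intervention. The sketch below is an assumption-laden illustration of that idea, not the released implementation; the class name, parameter names, and the soft-mask detail are invented for exposition.

```python
import torch
import torch.nn as nn
from torch.nn.utils.parametrizations import orthogonal

class LearnedBoundaryIntervention(nn.Module):
    """Toy distributed interchange intervention: a learned orthogonal rotation
    plus a learned soft boundary mask stand in for brute-force search over
    which dimensions to intervene on. Illustrative only."""

    def __init__(self, hidden_dim: int, temperature: float = 0.1):
        super().__init__()
        # Learned orthogonal rotation of the hidden space.
        self.rotation = orthogonal(nn.Linear(hidden_dim, hidden_dim, bias=False))
        # Learned boundary logits: a differentiable stand-in for searching
        # over which rotated dimensions carry the aligned causal variable.
        self.boundary_logits = nn.Parameter(torch.zeros(hidden_dim))
        self.temperature = temperature

    def forward(self, base_hidden: torch.Tensor, source_hidden: torch.Tensor) -> torch.Tensor:
        # Rotate both representations into the learned basis (x @ W.T).
        base_rot = self.rotation(base_hidden)
        source_rot = self.rotation(source_hidden)
        # Soft mask over rotated dimensions; sharpens as the temperature decreases.
        mask = torch.sigmoid(self.boundary_logits / self.temperature)
        # Interchange: swap the masked subspace of the base with the source.
        mixed = mask * source_rot + (1.0 - mask) * base_rot
        # Rotate back: for orthogonal W, (x @ W.T) @ W recovers x.
        return mixed @ self.rotation.weight
```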