A Multimodal Automated Interpretability Agent
April 22, 2024
Authors: Tamar Rott Shaham, Sarah Schwettmann, Franklin Wang, Achyuta Rajaram, Evan Hernandez, Jacob Andreas, Antonio Torralba
cs.AI
Abstract
This paper describes MAIA, a Multimodal Automated Interpretability Agent.
MAIA is a system that uses neural models to automate neural model understanding
tasks like feature interpretation and failure mode discovery. It equips a
pre-trained vision-language model with a set of tools that support iterative
experimentation on subcomponents of other models to explain their behavior.
These include tools commonly used by human interpretability researchers: for
synthesizing and editing inputs, computing maximally activating exemplars from
real-world datasets, and summarizing and describing experimental results.
Interpretability experiments proposed by MAIA compose these tools to describe
and explain system behavior. We evaluate applications of MAIA to computer
vision models. We first characterize MAIA's ability to describe (neuron-level)
features in learned representations of images. Across several trained models
and a novel dataset of synthetic vision neurons with paired ground-truth
descriptions, MAIA produces descriptions comparable to those generated by
expert human experimenters. We then show that MAIA can aid in two additional
interpretability tasks: reducing sensitivity to spurious features, and
automatically identifying inputs likely to be mis-classified.
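
The abstract describes an agent that equips a vision-language model with interpretability tools (exemplar computation, input synthesis and editing, result summarization) and composes them into experiments. The sketch below is a minimal, illustrative rendering of that loop, not the authors' released API: the class and method names (`NeuronSystem`, `Tools`, `dataset_exemplars`, `text2image`) are hypothetical, and the tool bodies are placeholders standing in for real generative and logging components.

```python
# Minimal sketch (assumed names, not MAIA's actual interface) of a tool-equipped
# interpretability agent's hypothesize-test-summarize loop.
from dataclasses import dataclass, field
from typing import Callable, List


@dataclass
class NeuronSystem:
    """Wraps the subject model: maps images to one neuron's activations."""
    activation_fn: Callable[[List[str]], List[float]]

    def call_neuron(self, images: List[str]) -> List[float]:
        return self.activation_fn(images)


@dataclass
class Tools:
    """Experiment tools: exemplar retrieval, input synthesis, result logging."""
    log: List[str] = field(default_factory=list)

    def dataset_exemplars(self, system: NeuronSystem,
                          dataset: List[str], k: int = 5) -> List[str]:
        # Return the k dataset images that most strongly activate the neuron.
        scored = sorted(zip(system.call_neuron(dataset), dataset), reverse=True)
        return [img for _, img in scored[:k]]

    def text2image(self, prompts: List[str]) -> List[str]:
        # Placeholder: a real agent would call a text-to-image model here.
        return [f"generated:{p}" for p in prompts]

    def summarize(self, note: str) -> None:
        self.log.append(note)


def run_experiment(system: NeuronSystem, tools: Tools, dataset: List[str]) -> str:
    # One iteration: inspect exemplars, synthesize probe inputs, record activations.
    exemplars = tools.dataset_exemplars(system, dataset)
    probes = tools.text2image([f"variation of {e}" for e in exemplars])
    tools.summarize(f"probe activations: {system.call_neuron(probes)}")
    return "; ".join(tools.log)


if __name__ == "__main__":
    # Toy "neuron" that fires on images whose filename mentions 'dog'.
    toy = NeuronSystem(lambda imgs: [1.0 if "dog" in i else 0.0 for i in imgs])
    print(run_experiment(toy, Tools(), ["dog_park.jpg", "car.jpg", "dog_beach.jpg"]))
```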