Explaining black box text modules in natural language with language models
May 17, 2023
Authors: Chandan Singh, Aliyah R. Hsu, Richard Antonello, Shailee Jain, Alexander G. Huth, Bin Yu, Jianfeng Gao
cs.AI
Abstract
Large language models (LLMs) have demonstrated remarkable prediction
performance for a growing array of tasks. However, their rapid proliferation
and increasing opaqueness have created a growing need for interpretability.
Here, we ask whether we can automatically obtain natural language explanations
for black box text modules. A "text module" is any function that maps text to a
scalar continuous value, such as a submodule within an LLM or a fitted model of
a brain region. "Black box" indicates that we only have access to the module's
inputs/outputs.
We introduce Summarize and Score (SASC), a method that takes in a text module
and returns a natural language explanation of the module's selectivity along
with a score for how reliable the explanation is. We study SASC in 3 contexts.
First, we evaluate SASC on synthetic modules and find that it often recovers
ground truth explanations. Second, we use SASC to explain modules found within
a pre-trained BERT model, enabling inspection of the model's internals.
Finally, we show that SASC can generate explanations for the response of
individual fMRI voxels to language stimuli, with potential applications to
fine-grained brain mapping. All code for using SASC and reproducing results is
made available on GitHub.
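
For concreteness, the sketch below illustrates the two abstractions the abstract relies on: a "text module" as any function mapping text to a scalar, and a SASC-style call that returns a natural language explanation together with a reliability score. This is a minimal illustrative sketch; the `toy_module` and `explain_module` names and their interface are assumptions made for this page, not the API of the released code.

```python
# Minimal sketch (assumptions): a "text module" maps a string to a float, and a
# SASC-style explainer returns (explanation, reliability_score). explain_module
# below is a hypothetical stand-in for the method, not the released implementation.
from typing import Callable, List, Tuple

TextModule = Callable[[str], float]

def toy_module(text: str) -> float:
    """A synthetic text module: responds selectively to food-related words."""
    food_words = {"pizza", "bread", "soup", "apple"}
    tokens = text.lower().split()
    return sum(tok in food_words for tok in tokens) / max(len(tokens), 1)

def explain_module(module: TextModule, candidate_texts: List[str]) -> Tuple[str, float]:
    """Hypothetical SASC-style interface: summarize what drives the module's
    output and score how well that summary separates high from low responses."""
    ranked = sorted(candidate_texts, key=module, reverse=True)
    explanation = f"responds most strongly to inputs like: {ranked[0]!r}"
    # Crude proxy for an explanation score: response gap between the text that
    # best matches the explanation and the one that matches it least.
    reliability = module(ranked[0]) - module(ranked[-1])
    return explanation, reliability

if __name__ == "__main__":
    texts = ["I ate pizza and soup", "the stock market fell", "fresh bread and apples"]
    explanation, score = explain_module(toy_module, texts)
    print(explanation, score)
```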