AtP*: An efficient and scalable method for localizing LLM behaviour to components
March 1, 2024
Authors: János Kramár, Tom Lieberum, Rohin Shah, Neel Nanda
cs.AI
Abstract
Activation Patching is a method of directly computing causal attributions of
behavior to model components. However, applying it exhaustively requires a
sweep with cost scaling linearly in the number of model components, which can
be prohibitively expensive for SoTA Large Language Models (LLMs). We
investigate Attribution Patching (AtP), a fast gradient-based approximation to
Activation Patching, and find two classes of failure modes of AtP which lead to
significant false negatives. We propose a variant of AtP called AtP*, with two
changes to address these failure modes while retaining scalability. We present
the first systematic study of AtP and alternative methods for faster activation
patching and show that AtP significantly outperforms all other investigated
methods, with AtP* providing further significant improvement. Finally, we
provide a method to bound the probability of remaining false negatives of AtP*
estimates.
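To illustrate the core idea the abstract contrasts: exact activation patching reruns the model with a component's activation replaced, while AtP approximates that causal effect with a single first-order (gradient) term. The sketch below is a minimal toy illustration, not the paper's implementation; the metric function, weights, and activation values are all hypothetical stand-ins for a real transformer component and behavioral metric.

```python
import numpy as np

# Hypothetical toy "metric of interest" m(a) = tanh(w . a), where a stands in
# for one model component's activation. A real setting would use a transformer
# and a behavioral metric such as a logit difference.
w = np.array([0.5, -1.0, 2.0])

def metric(a):
    return np.tanh(w @ a)

def metric_grad(a):
    # Analytic gradient: d/da tanh(w . a) = (1 - tanh^2(w . a)) * w.
    return (1.0 - np.tanh(w @ a) ** 2) * w

a_clean = np.array([0.10, 0.20, -0.10])   # activation on the clean prompt
a_patch = np.array([0.15, 0.18, -0.05])   # activation to patch in (e.g. from a corrupted prompt)

# Exact activation patching: rerun the "model" with the patched activation.
# Done exhaustively, this costs one forward pass per component.
exact_effect = metric(a_patch) - metric(a_clean)

# AtP: first-order estimate from one gradient at the clean activation,
# which amortizes across all components via a single backward pass.
atp_estimate = metric_grad(a_clean) @ (a_patch - a_clean)

print(f"exact patching effect: {exact_effect:.4f}")
print(f"AtP estimate:          {atp_estimate:.4f}")
```

The approximation is good when the activation change is small or the metric is near-linear in the activation; the paper's failure modes arise exactly where this linearization breaks down (e.g. attention saturation), which is what the AtP* corrections address.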