From Directions to Regions: Decomposing Activations in Language Models via Local Geometry
February 2, 2026
Authors: Or Shafran, Shaked Ronen, Omri Fahn, Shauli Ravfogel, Atticus Geiger, Mor Geva
cs.AI
Abstract
Activation decomposition methods in language models are tightly coupled to geometric assumptions about how concepts are realized in activation space. Existing approaches search for individual global directions, implicitly assuming linear separability, which overlooks concepts with nonlinear or multi-dimensional structure. In this work, we leverage Mixture of Factor Analyzers (MFA) as a scalable, unsupervised alternative that models the activation space as a collection of Gaussian regions, each with its own local covariance structure. MFA decomposes activations into two compositional geometric objects: the region's centroid in activation space, and the local variation from the centroid. We train large-scale MFAs for Llama-3.1-8B and Gemma-2-2B, and show they capture complex, nonlinear structures in activation space. Moreover, evaluations on localization and steering benchmarks show that MFA outperforms unsupervised baselines, is competitive with supervised localization methods, and often achieves stronger steering performance than sparse autoencoders. Together, our findings position local geometry, expressed through subspaces, as a promising unit of analysis for scalable concept discovery and model control, accounting for complex structures that isolated directions fail to capture.
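The decomposition the abstract describes can be sketched numerically: each region k carries a centroid mu_k and a low-rank factor-loading matrix W_k, and an activation assigned to region k splits into mu_k plus an in-subspace variation W_k z. Below is a minimal NumPy sketch under assumed toy dimensions (the region-assignment rule, variable names, and sizes are illustrative, not the paper's actual training procedure, which fits the mixture unsupervised on real model activations):

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy stand-in for an activation space: d-dim activations, K regions,
# each region a Gaussian with an r-dimensional local subspace.
d, K, r = 8, 3, 2
centroids = np.zeros((K, d))                  # region centroids mu_k,
centroids[np.arange(K), np.arange(K)] = 10.0  # placed well apart
loadings = rng.normal(size=(K, d, r))         # local factor loadings W_k

def decompose(x, centroids, loadings):
    """Assign x to its nearest region, then split it into the
    region's centroid plus a local variation inside the subspace."""
    k = int(np.argmin(np.linalg.norm(centroids - x, axis=1)))
    W = loadings[k]
    # Least-squares local coordinates z solving x - mu_k ≈ W_k z
    z, *_ = np.linalg.lstsq(W, x - centroids[k], rcond=None)
    return k, centroids[k], W @ z

# An activation generated from region 1's model is recovered as
# centroid + in-subspace variation (any residual is what the
# factor-analysis noise term would absorb).
z_true = 0.5 * rng.normal(size=r)
x = centroids[1] + loadings[1] @ z_true
k, mu, variation = decompose(x, centroids, loadings)
print(k)                                       # → 1
print(np.allclose(mu + variation, x))          # → True
```

With real MFA, region assignment would use the Gaussians' posterior responsibilities rather than nearest-centroid distance, but the compositional structure, centroid plus local subspace variation, is the same.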