REMA: A Unified Reasoning Manifold Framework for Interpreting Large Language Model
September 26, 2025
Authors: Bo Li, Guanzhi Deng, Ronghao Chen, Junrong Yue, Shuo Zhang, Qinghua Zhao, Linqi Song, Lijie Wen
cs.AI
Abstract
Understanding how Large Language Models (LLMs) perform complex reasoning, and how their reasoning fails, is a central challenge in interpretability research. To provide a quantifiable geometric perspective for such analysis, we define the concept of
the Reasoning Manifold, a latent low-dimensional geometric structure formed by
the internal representations corresponding to all correctly reasoned
generations. This structure can be conceptualized as the embodiment of the
effective thinking paths that the model has learned to successfully solve a
given task. Based on this concept, we build REMA, a framework that explains the
origins of failures by quantitatively comparing the spatial relationships of
internal model representations corresponding to both erroneous and correct
reasoning samples. Specifically, REMA first quantifies the geometric deviation
of each erroneous representation by computing its k-nearest-neighbor distance to the manifold approximated by the correct representations,
thereby providing a unified failure signal. It then localizes the divergence
points where these deviations first become significant by tracking this
deviation metric across the model's layers and comparing it against a baseline
of internal fluctuations among correct representations, thus identifying where the reasoning chain begins to go off track. Our extensive experiments on
diverse language and multimodal models and tasks demonstrate the
low-dimensional nature of the reasoning manifold and the high separability
between erroneous and correct reasoning representations. The results also
validate the effectiveness of the REMA framework in analyzing the origins of
reasoning failures. This research connects abstract reasoning failures to
measurable geometric deviations in representations, providing new avenues for
in-depth understanding and diagnosis of the internal computational processes of
black-box models.
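
To make the first step concrete, here is a minimal sketch, not the authors' released code, of how the k-nearest-neighbor deviation score described in the abstract could be computed. It assumes per-sample hidden representations have already been extracted as NumPy arrays; the function name `knn_deviation` and its parameters are illustrative assumptions.

```python
# Minimal sketch (illustrative, not the authors' implementation) of the
# k-NN deviation score: the mean distance from a representation to its k
# nearest neighbors among correct representations, which serve as a
# point-cloud approximation of the reasoning manifold.
import numpy as np
from sklearn.neighbors import NearestNeighbors

def knn_deviation(correct_reps: np.ndarray,
                  query_reps: np.ndarray,
                  k: int = 10,
                  exclude_self: bool = False) -> np.ndarray:
    """Return one deviation score per row of query_reps."""
    n_neighbors = k + 1 if exclude_self else k
    nn = NearestNeighbors(n_neighbors=n_neighbors).fit(correct_reps)
    dists, _ = nn.kneighbors(query_reps)   # shape: (n_queries, n_neighbors)
    if exclude_self:
        dists = dists[:, 1:]               # drop the zero self-distance
    return dists.mean(axis=1)
```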
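The divergence-localization step could be sketched in a similar spirit, reusing `knn_deviation` from above: per-layer deviations of erroneous samples are compared against the internal fluctuation of correct samples. The z-score comparison and the threshold `z_thresh` are assumptions for illustration; the paper's actual significance criterion may differ.

```python
def localize_divergence(correct_by_layer, erroneous_by_layer,
                        k: int = 10, z_thresh: float = 2.0):
    """Return the first layer index where erroneous representations deviate
    significantly from the correct-representation manifold, relative to the
    internal fluctuation of correct representations themselves.

    Both arguments are lists of (n_samples, dim) arrays, one per layer."""
    for layer, (correct, erroneous) in enumerate(
            zip(correct_by_layer, erroneous_by_layer)):
        # Baseline: spread of correct samples around their own manifold
        baseline = knn_deviation(correct, correct, k=k, exclude_self=True)
        # Signal: distance of erroneous samples to the correct manifold
        signal = knn_deviation(correct, erroneous, k=k)
        z = (signal.mean() - baseline.mean()) / (baseline.std() + 1e-8)
        if z > z_thresh:
            return layer   # divergence point: deviation first becomes significant
    return None            # no layer crossed the threshold
```

In practice the per-layer inputs could come from hidden states collected at every layer for correctly and incorrectly answered samples of the same task.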