REMA: A Unified Reasoning Manifold Framework for Interpreting Large Language Models
September 26, 2025
Authors: Bo Li, Guanzhi Deng, Ronghao Chen, Junrong Yue, Shuo Zhang, Qinghua Zhao, Linqi Song, Lijie Wen
cs.AI
Abstract
Understanding how Large Language Models (LLMs) perform complex reasoning, and
how they fail, is a central challenge in interpretability research. To
provide a measurable geometric perspective on this question, we define the concept of
the Reasoning Manifold, a latent low-dimensional geometric structure formed by
the internal representations corresponding to all correctly reasoned
generations. This structure can be conceptualized as the embodiment of the
effective thinking paths that the model has learned to successfully solve a
given task. Based on this concept, we build REMA, a framework that explains the
origins of failures by quantitatively comparing the spatial relationships of
internal model representations corresponding to both erroneous and correct
reasoning samples. Specifically, REMA first quantifies the geometric deviation
of each erroneous representation by computing its k-nearest-neighbor
distance to the approximated manifold formed by correct representations,
thereby providing a unified failure signal. It then localizes the divergence
points where these deviations first become significant by tracking this
deviation metric across the model's layers and comparing it against a baseline
of internal fluctuations from correct representations, thus identifying where
the reasoning chain begins to go off track. Our extensive experiments on
diverse language and multimodal models and tasks demonstrate the
low-dimensional nature of the reasoning manifold and the high separability
between erroneous and correct reasoning representations. The results also
validate the effectiveness of the REMA framework in analyzing the origins of
reasoning failures. This research connects abstract reasoning failures to
measurable geometric deviations in representations, providing new avenues for
in-depth understanding and diagnosis of the internal computational processes of
black-box models.
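
The two-step procedure described in the abstract lends itself to a compact illustration. Below is a minimal sketch in Python, assuming per-layer hidden states have already been extracted as NumPy arrays of shape (n_samples, d); the function names, the choice of k, and the mean + z·std baseline rule are illustrative assumptions, not the paper's exact procedure.

```python
# Minimal sketch of REMA's two steps, assuming per-layer hidden states are
# available as NumPy arrays of shape (n_samples, d). Function names, k, and
# the mean + z*std baseline rule are illustrative assumptions, not the
# paper's exact procedure.
import numpy as np
from sklearn.neighbors import NearestNeighbors


def knn_deviation(manifold_reps, query_reps, k=10):
    """Mean distance from each query representation to its k nearest
    neighbors among correct representations -- a proxy for its deviation
    from the approximated reasoning manifold."""
    nn = NearestNeighbors(n_neighbors=k).fit(manifold_reps)
    dists, _ = nn.kneighbors(query_reps)  # shape: (n_query, k)
    return dists.mean(axis=1)


def localize_divergence(correct_by_layer, erroneous_by_layer, k=10, z=2.0):
    """Return the first layer at which the erroneous samples' mean deviation
    exceeds a baseline of internal fluctuations from held-out correct
    samples; return None if no layer crosses the threshold."""
    for layer, (correct, erroneous) in enumerate(
        zip(correct_by_layer, erroneous_by_layer)
    ):
        n = len(correct)
        fit, held_out = correct[: n // 2], correct[n // 2 :]
        baseline = knn_deviation(fit, held_out, k)  # correct samples' own spread
        err_dev = knn_deviation(fit, erroneous, k)
        if err_dev.mean() > baseline.mean() + z * baseline.std():
            return layer
    return None


# Toy usage: 3 layers of random stand-ins, with an artificial shift at layer 1.
rng = np.random.default_rng(0)
correct = [rng.normal(size=(200, 64)) for _ in range(3)]
erroneous = [rng.normal(size=(50, 64)) + (2.0 if i == 1 else 0.0) for i in range(3)]
print(localize_divergence(correct, erroneous))  # -> 1 for this toy shift
```

Splitting the correct samples into a fit set and a held-out set means the baseline measures the manifold approximation's own internal fluctuation, mirroring the comparison against "a baseline of internal fluctuations from correct representations" that the abstract describes.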
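To probe the claimed low-dimensional nature of the manifold, one simple check counts the principal components needed to explain most of the variance of correct-reasoning representations. Both the 90% threshold and the use of PCA (a linear method, where the paper's actual intrinsic-dimension estimator may be nonlinear) are illustrative assumptions.

```python
# A simple linear probe of the manifold's dimensionality: count how many
# principal components of the correct representations explain 90% of the
# variance. The threshold and the use of PCA (rather than a nonlinear
# intrinsic-dimension estimator) are illustrative assumptions.
import numpy as np
from sklearn.decomposition import PCA


def effective_dim(correct_reps, var_threshold=0.90):
    """Smallest number of principal components whose cumulative explained
    variance ratio reaches var_threshold."""
    pca = PCA().fit(correct_reps)
    cum = np.cumsum(pca.explained_variance_ratio_)
    return int(np.searchsorted(cum, var_threshold) + 1)
```

If correct-reasoning representations really concentrate on a low-dimensional structure, this count should fall far below the ambient hidden size; a nonlinear estimator would be needed to support the manifold claim more rigorously.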