The Geometry of Reasoning: Flowing Logics in Representation Space
October 10, 2025
Authors: Yufa Zhou, Yixiao Wang, Xunjian Yin, Shuyan Zhou, Anru R. Zhang
cs.AI
Abstract
We study how large language models (LLMs) "think" through their representation space. We propose a novel geometric framework that models an LLM's reasoning as flows: embedding trajectories that evolve as the logic unfolds. We disentangle logical structure from semantics by employing the same natural deduction propositions with varied semantic carriers, allowing us to test whether LLMs internalize logic beyond surface form. This perspective connects reasoning with geometric quantities such as position, velocity, and curvature, enabling formal analysis in representation and concept spaces. Our theory establishes that (1) LLM reasoning corresponds to smooth flows in representation space, and (2) logical statements act as local controllers of these flows' velocities. Using learned representation proxies, we design controlled experiments to visualize and quantify reasoning flows, providing empirical validation of our theoretical framework. Our work serves as both a conceptual foundation and a set of practical tools for studying reasoning phenomena, offering a new lens for the interpretability and formal analysis of LLM behavior.
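To make the geometric reading concrete, the sketch below shows one plausible way to estimate the quantities the abstract names (position, velocity, and curvature) from a discrete trajectory of per-step embeddings. It is a minimal illustration under assumed finite-difference definitions, not the paper's actual method; the helper `trajectory_geometry` and the random-walk stand-in trajectory are hypothetical.

```python
# Illustrative sketch (assumptions, not the paper's implementation):
# given a sequence of hidden-state embeddings collected as a model generates
# a reasoning chain, estimate discrete analogues of velocity and curvature
# of the trajectory in representation space.
import numpy as np

def trajectory_geometry(embeddings: np.ndarray, eps: float = 1e-8):
    """embeddings: array of shape (T, d), one representation per reasoning step."""
    # Velocity: first-order finite difference between consecutive positions.
    velocity = np.diff(embeddings, axis=0)                 # (T-1, d)
    speed = np.linalg.norm(velocity, axis=1)               # (T-1,)

    # Curvature proxy: turning angle between consecutive velocity vectors.
    v_prev, v_next = velocity[:-1], velocity[1:]
    cos_angle = np.sum(v_prev * v_next, axis=1) / (
        np.linalg.norm(v_prev, axis=1) * np.linalg.norm(v_next, axis=1) + eps
    )
    turning_angle = np.arccos(np.clip(cos_angle, -1.0, 1.0))  # (T-2,)
    return speed, turning_angle

if __name__ == "__main__":
    # A random walk stands in for a real reasoning trajectory (d = 768, 12 steps).
    rng = np.random.default_rng(0)
    traj = np.cumsum(rng.normal(size=(12, 768)), axis=0)
    speed, curvature = trajectory_geometry(traj)
    print(speed.shape, curvature.shape)                    # (11,), (10,)
```

Under the paper's claims, one would expect such per-step speeds to vary locally with the logical statements being processed, since the theory casts logical statements as local controllers of the flow's velocity.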