The Geometry of Reasoning: Flowing Logics in Representation Space
October 10, 2025
Authors: Yufa Zhou, Yixiao Wang, Xunjian Yin, Shuyan Zhou, Anru R. Zhang
cs.AI
Abstract
We study how large language models (LLMs) "think" through their
representation space. We propose a novel geometric framework that models an
LLM's reasoning as flows -- embedding trajectories evolving where logic goes.
We disentangle logical structure from semantics by employing the same natural
deduction propositions with varied semantic carriers, allowing us to test
whether LLMs internalize logic beyond surface form. This perspective connects
reasoning with geometric quantities such as position, velocity, and curvature,
enabling formal analysis in representation and concept spaces. Our theory
establishes: (1) LLM reasoning corresponds to smooth flows in representation
space, and (2) logical statements act as local controllers of these flows'
velocities. Using learned representation proxies, we design controlled
experiments to visualize and quantify reasoning flows, providing empirical
validation of our theoretical framework. Our work serves as both a conceptual
foundation and a set of practical tools for studying reasoning phenomena,
offering a new lens for the interpretability and formal analysis of LLM behavior.
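To make the geometric quantities concrete, the sketch below computes discrete velocity (step-wise displacement), speed, and a curvature proxy (the turning angle between consecutive velocity vectors) along a trajectory of embeddings, one per reasoning step. This is an illustrative assumption about how such quantities could be measured from finite samples, not the paper's actual estimator; the function name and the toy trajectory are hypothetical.

```python
import numpy as np

def trajectory_geometry(X):
    """Finite-difference geometry of an embedding trajectory.

    X: (T, d) array, one representation-space embedding per reasoning step.
    Returns (speed, curvature):
      speed     -- (T-1,) norms of the discrete velocities
      curvature -- (T-2,) turning angles between successive velocity vectors
    """
    V = np.diff(X, axis=0)                       # discrete velocities, (T-1, d)
    speed = np.linalg.norm(V, axis=1)            # step-wise speed
    # Curvature proxy: angle between successive unit velocity directions.
    unit = V / np.clip(speed[:, None], 1e-12, None)
    cos_ang = np.clip((unit[:-1] * unit[1:]).sum(axis=1), -1.0, 1.0)
    curvature = np.arccos(cos_ang)
    return speed, curvature

# Toy trajectory in 2-D: three straight unit steps, then a right-angle turn.
X = np.array([[0.0, 0.0], [1.0, 0.0], [2.0, 0.0], [2.0, 1.0]])
speed, curvature = trajectory_geometry(X)
# speed -> [1, 1, 1]; curvature -> [0, pi/2] (straight segment, then a 90-degree turn)
```

In this picture, a logical statement acting as a "local controller of the flow's velocity" would show up as a systematic change in `speed` or a curvature spike at the step where the statement is consumed.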