Think3D: Thinking with Space for Spatial Reasoning
January 19, 2026
Authors: Zaibin Zhang, Yuhan Wu, Lianjie Jia, Yifan Wang, Zhongbo Zhang, Yijiang Li, Binghao Ran, Fuxi Zhang, Zhuohan Sun, Zhenfei Yin, Lijun Wang, Huchuan Lu
cs.AI
Abstract
Understanding and reasoning about the physical world requires spatial intelligence: the ability to interpret geometry, perspective, and spatial relations beyond 2D perception. While recent large vision-language models (VLMs) excel at visual understanding, they remain fundamentally 2D perceivers and struggle with genuine 3D reasoning. We introduce Think3D, a framework that enables VLM agents to think with 3D space. By leveraging 3D reconstruction models that recover point clouds and camera poses from images or videos, Think3D allows the agent to actively manipulate space through camera-based operations and ego/global-view switching, transforming spatial reasoning into an interactive 3D chain-of-thought process. Without additional training, Think3D significantly improves the spatial reasoning performance of advanced models such as GPT-4.1 and Gemini 2.5 Pro, yielding average gains of +7.8% on BLINK Multi-view and MindCube, and +4.7% on VSI-Bench. We further show that smaller models, which struggle with spatial exploration, benefit significantly from a reinforcement learning policy that enables the model to select informative viewpoints and operations. With RL, the benefit from tool usage increases from +0.7% to +6.8%. Our findings demonstrate that training-free, tool-augmented spatial exploration is a viable path toward more flexible and human-like 3D reasoning in multimodal agents, establishing a new dimension of multimodal intelligence. Code and weights are released at https://github.com/zhangzaibin/spagent.
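The interactive loop the abstract describes (reconstruct a point cloud, apply camera operations, switch views, keep the most informative viewpoint) can be sketched as follows. This is a minimal toy illustration, not the released Think3D code; all names (`rotate_y`, `render`, the hard-coded point cloud) are illustrative assumptions standing in for the reconstruction model and the agent's operation-selection policy.

```python
# Hypothetical sketch of Think3D's interactive 3D chain-of-thought loop:
# an agent iterates over camera operations on a reconstructed point cloud,
# rendering candidate views and keeping the most informative one.
import math

def rotate_y(point, angle):
    """Orbit a 3D point around the y-axis (a toy camera operation)."""
    x, y, z = point
    c, s = math.cos(angle), math.sin(angle)
    return (c * x + s * z, y, -s * x + c * z)

def render(points, cam_pos, view="ego"):
    """Toy 'renderer': the global view sees every point, while the
    ego view only sees points in front of the camera (z > camera z)."""
    if view == "global":
        return len(points)
    return sum(1 for p in points if p[2] > cam_pos[2])

# A tiny "point cloud" standing in for the 3D reconstruction output.
cloud = [(0.0, 0.0, 1.0), (1.0, 0.0, 2.0), (-1.0, 0.0, -1.0)]
cam = (0.0, 0.0, 0.0)

# Try a few camera rotations and keep the viewpoint that reveals the
# most points -- a stand-in for selecting an "informative" view.
best_angle, best_visible = 0.0, -1
for step in range(4):
    angle = step * math.pi / 2
    rotated = [rotate_y(p, angle) for p in cloud]
    visible = render(rotated, cam, view="ego")
    if visible > best_visible:
        best_angle, best_visible = angle, visible

print(best_visible)  # count of points visible from the chosen viewpoint
```

In the actual framework this selection is driven by the VLM itself (or, for smaller models, by the learned RL policy); the exhaustive sweep above merely illustrates the explore-then-answer structure.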