VLM-3R: Vision-Language Models Augmented with Instruction-Aligned 3D Reconstruction
May 26, 2025
Authors: Zhiwen Fan, Jian Zhang, Renjie Li, Junge Zhang, Runjin Chen, Hezhen Hu, Kevin Wang, Huaizhi Qu, Dilin Wang, Zhicheng Yan, Hongyu Xu, Justin Theiss, Tianlong Chen, Jiachen Li, Zhengzhong Tu, Zhangyang Wang, Rakesh Ranjan
cs.AI
Abstract
The rapid advancement of Large Multimodal Models (LMMs) for 2D images and videos has motivated extending these models to understand 3D scenes, aiming for human-like visual-spatial intelligence. Nevertheless, achieving deep spatial understanding comparable to human capabilities poses significant challenges in model encoding and data acquisition. Existing methods frequently depend on external depth sensors for geometry capture or utilize off-the-shelf algorithms for pre-constructing 3D maps, thereby limiting their scalability, especially with prevalent monocular video inputs and for time-sensitive applications. In this work, we introduce VLM-3R, a unified framework for Vision-Language Models (VLMs) that incorporates 3D reconstructive instruction tuning. VLM-3R processes monocular video frames by employing a geometry encoder to derive implicit 3D tokens that represent spatial understanding. Leveraging our Spatial-Visual-View Fusion and over 200K curated 3D reconstructive instruction-tuning question-answer (QA) pairs, VLM-3R effectively aligns real-world spatial context with language instructions. This enables monocular 3D spatial assistance and embodied reasoning. To facilitate the evaluation of temporal reasoning, we introduce the Vision-Spatial-Temporal Intelligence benchmark, featuring over 138.6K QA pairs across five distinct tasks focused on evolving spatial relationships. Extensive experiments demonstrate that our model, VLM-3R, not only facilitates robust visual-spatial reasoning but also enables the understanding of temporal 3D context changes, excelling in both accuracy and scalability.
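The abstract describes a geometry encoder that turns monocular video frames into implicit 3D tokens, which a Spatial-Visual-View Fusion step combines with standard 2D visual tokens before they are passed to the language model. The snippet below is a minimal PyTorch sketch of that kind of pipeline; the module names, dimensions, query-based token pooling, and cross-attention fusion are illustrative assumptions on my part, not the authors' released implementation.

```python
# Hypothetical sketch of a VLM-3R-style pipeline (assumed design, not the paper's code).
import torch
import torch.nn as nn


class GeometryEncoder(nn.Module):
    """Maps monocular video frames to a small set of implicit 3D tokens (assumed)."""

    def __init__(self, patch_dim=768, num_3d_tokens=64, token_dim=1024):
        super().__init__()
        # Patchify each frame, then let learned queries pool geometry across time.
        self.patchify = nn.Sequential(
            nn.Conv2d(3, patch_dim, kernel_size=16, stride=16),
            nn.Flatten(start_dim=2),
        )
        self.proj = nn.Linear(patch_dim, token_dim)
        self.queries = nn.Parameter(torch.randn(num_3d_tokens, token_dim))
        self.attn = nn.MultiheadAttention(token_dim, num_heads=8, batch_first=True)

    def forward(self, frames):  # frames: (B, T, 3, H, W)
        b = frames.shape[0]
        feats = self.patchify(frames.flatten(0, 1))        # (B*T, C, N) patch features
        feats = self.proj(feats.transpose(1, 2))           # (B*T, N, D)
        feats = feats.reshape(b, -1, feats.shape[-1])      # concatenate tokens over time
        q = self.queries.unsqueeze(0).expand(b, -1, -1)
        tokens_3d, _ = self.attn(q, feats, feats)          # implicit 3D tokens
        return tokens_3d                                   # (B, num_3d_tokens, D)


class SpatialVisualViewFusion(nn.Module):
    """Fuses 2D visual tokens with implicit 3D tokens via cross-attention (assumed)."""

    def __init__(self, token_dim=1024):
        super().__init__()
        self.cross_attn = nn.MultiheadAttention(token_dim, num_heads=8, batch_first=True)
        self.norm = nn.LayerNorm(token_dim)

    def forward(self, visual_tokens, tokens_3d):
        fused, _ = self.cross_attn(visual_tokens, tokens_3d, tokens_3d)
        return self.norm(visual_tokens + fused)            # residual fusion


# Toy forward pass: in instruction tuning, the fused tokens would be prepended
# to the text embeddings of the LLM alongside the QA-pair instructions.
frames = torch.randn(1, 8, 3, 224, 224)       # 8 monocular video frames
visual_tokens = torch.randn(1, 256, 1024)     # tokens from a standard 2D vision encoder
tokens_3d = GeometryEncoder()(frames)
multimodal_tokens = SpatialVisualViewFusion()(visual_tokens, tokens_3d)
print(multimodal_tokens.shape)                # torch.Size([1, 256, 1024])
```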