

MM-Spatial: Exploring 3D Spatial Understanding in Multimodal LLMs

March 17, 2025
Authors: Erik Daxberger, Nina Wenzel, David Griffiths, Haiming Gang, Justin Lazarow, Gefen Kohavi, Kai Kang, Marcin Eichner, Yinfei Yang, Afshin Dehghan, Peter Grasch
cs.AI

Abstract

Multimodal large language models (MLLMs) excel at 2D visual understanding but remain limited in their ability to reason about 3D space. In this work, we leverage large-scale high-quality 3D scene data with open-set annotations to introduce 1) a novel supervised fine-tuning dataset and 2) a new evaluation benchmark, focused on indoor scenes. Our Cubify Anything VQA (CA-VQA) data covers diverse spatial tasks including spatial relationship prediction, metric size and distance estimation, and 3D grounding. We show that CA-VQA enables us to train MM-Spatial, a strong generalist MLLM that also achieves state-of-the-art performance on 3D spatial understanding benchmarks, including our own. We show how incorporating metric depth and multi-view inputs (provided in CA-VQA) can further improve 3D understanding, and demonstrate that data alone allows our model to achieve depth perception capabilities comparable to dedicated monocular depth estimation models. We will publish our SFT dataset and benchmark.
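To make the task mix concrete, here is a minimal, hypothetical sketch of what CA-VQA-style question-answer pairs for the three task types (spatial relationship prediction, metric size/distance estimation, 3D grounding) might look like. The field names, coordinate convention, and numeric values are illustrative assumptions for exposition only, not the released dataset schema.

```python
# Hypothetical illustration of CA-VQA-style samples; all fields/values are assumed, not the actual schema.
ca_vqa_style_samples = [
    {
        "task": "spatial_relationship",
        "question": "Is the lamp to the left of the sofa from the camera's viewpoint?",
        "answer": "yes",
    },
    {
        "task": "metric_distance",
        "question": "How far apart are the chair and the table, in meters?",
        "answer": "0.8",
    },
    {
        "task": "3d_grounding",
        "question": "Give the 3D bounding box of the mug on the desk.",
        # Assumed convention: center (x, y, z) and size (w, h, d) in meters, camera coordinates.
        "answer": {"center": [0.12, -0.35, 1.40], "size": [0.09, 0.11, 0.09]},
    },
]

for sample in ca_vqa_style_samples:
    print(f"{sample['task']}: {sample['question']}")
```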

