MetaSpatial: Reinforcing 3D Spatial Reasoning in VLMs for the Metaverse

March 24, 2025
Authors: Zhenyu Pan, Han Liu
cs.AI

Abstract

We present MetaSpatial, the first reinforcement learning (RL)-based framework designed to enhance 3D spatial reasoning in vision-language models (VLMs), enabling real-time 3D scene generation without the need for hard-coded optimizations. MetaSpatial addresses two core challenges: (i) the lack of internalized 3D spatial reasoning in VLMs, which limits their ability to generate realistic layouts, and (ii) the inefficiency of traditional supervised fine-tuning (SFT) for layout generation tasks, as perfect ground-truth annotations are unavailable. Our key innovation is a multi-turn RL-based optimization mechanism that integrates physics-aware constraints and rendered-image evaluations, ensuring generated 3D layouts are coherent, physically plausible, and aesthetically consistent. Methodologically, MetaSpatial introduces an adaptive, iterative reasoning process in which the VLM refines spatial arrangements over multiple turns by analyzing rendered outputs, progressively improving scene coherence. Empirical evaluations demonstrate that MetaSpatial significantly enhances the spatial consistency and formatting stability of models at various scales. After training, object placements are more realistic, aligned, and functionally coherent, validating the effectiveness of RL for 3D spatial reasoning in metaverse, AR/VR, digital twin, and game development applications. Our code, data, and training pipeline are publicly available at https://github.com/PzySeere/MetaSpatial.
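The abstract outlines a loop in which the VLM proposes a layout, the layout is rendered and checked against physics-aware constraints, and the resulting signal drives the next refinement turn. The Python sketch below illustrates that loop in miniature under stated assumptions; every identifier in it (PlacedObject, Layout, propose_layout, physics_penalty, render_and_score, refine) is a hypothetical placeholder rather than the paper's actual API, and the random proposer merely stands in for the VLM. The real training pipeline is in the linked repository.

```python
# A minimal, self-contained sketch of a multi-turn, reward-driven refinement
# loop in the spirit of the abstract. All names here are hypothetical
# placeholders, not MetaSpatial's actual API.

import random
from dataclasses import dataclass, field
from typing import Optional

@dataclass
class PlacedObject:
    name: str
    x: float
    y: float
    size: float = 1.0  # footprint radius, for the toy overlap check below

@dataclass
class Layout:
    objects: list = field(default_factory=list)

def propose_layout(prompt: str, feedback: Optional[str]) -> Layout:
    """Stand-in for the VLM: emit a candidate layout, optionally conditioned
    on feedback from the previous turn (here just random placements)."""
    return Layout(objects=[
        PlacedObject(name, random.uniform(0, 10), random.uniform(0, 10))
        for name in ("sofa", "table", "lamp")
    ])

def physics_penalty(layout: Layout) -> float:
    """Physics-aware constraint: penalize pairwise object overlap."""
    penalty = 0.0
    objs = layout.objects
    for i in range(len(objs)):
        for j in range(i + 1, len(objs)):
            a, b = objs[i], objs[j]
            dist = ((a.x - b.x) ** 2 + (a.y - b.y) ** 2) ** 0.5
            penalty += max(0.0, (a.size + b.size) / 2 - dist)
    return penalty

def render_and_score(layout: Layout) -> float:
    """Stand-in for rendering the scene and scoring the image; a real system
    would pair a renderer with a learned coherence/aesthetics critic."""
    return random.uniform(0.0, 1.0)

def reward(layout: Layout) -> float:
    """Composite reward: rendered-image score minus physics violations."""
    return render_and_score(layout) - physics_penalty(layout)

def refine(prompt: str, turns: int = 4) -> Layout:
    """Multi-turn loop: propose, evaluate, feed the result back as feedback,
    and keep the best layout seen. In actual RL training, the reward would
    update the policy (the VLM) rather than just select among samples."""
    best, best_r, feedback = None, float("-inf"), None
    for t in range(turns):
        layout = propose_layout(prompt, feedback)
        r = reward(layout)
        if r > best_r:
            best, best_r = layout, r
        feedback = f"turn {t}: reward={r:.3f}"
    return best

if __name__ == "__main__":
    for obj in refine("a cozy living room").objects:
        print(obj)
```

Keeping physics_penalty separate from render_and_score mirrors the abstract's pairing of hard physical plausibility checks with softer image-based evaluation inside a single composite reward.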

