

Aligning Text, Images, and 3D Structure Token-by-Token

June 9, 2025
Authors: Aadarsh Sahoo, Vansh Tibrewal, Georgia Gkioxari
cs.AI

Abstract

Creating machines capable of understanding the world in 3D is essential for assisting designers who build and edit 3D environments and for robots navigating and interacting within three-dimensional space. Inspired by advances in language and image modeling, we investigate the potential of autoregressive models for a new modality: structured 3D scenes. To this end, we propose a unified LLM framework that aligns language, images, and 3D scenes, and provide a detailed "cookbook" outlining critical design choices for achieving optimal training and performance, addressing key questions related to data representation, modality-specific objectives, and more. We evaluate performance across four core 3D tasks -- rendering, recognition, instruction-following, and question-answering -- and four 3D datasets, both synthetic and real-world. We extend our approach to reconstruct complex 3D object shapes by enriching our 3D modality with quantized shape encodings, and show our model's effectiveness on real-world 3D object recognition tasks. Project webpage: https://glab-caltech.github.io/kyvo/
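
The abstract does not spell out the token format, so the following is a minimal, purely illustrative Python sketch of what aligning 3D structure "token-by-token" could look like: a structured scene (object categories plus quantized continuous attributes) is flattened into a discrete token sequence that an autoregressive LLM could consume alongside text and image tokens. All names, value ranges, and the quantization scheme below are hypothetical assumptions, not the paper's actual representation.

```python
# Illustrative sketch only: serializing a structured 3D scene into discrete
# tokens for an autoregressive model. The scheme is hypothetical and is not
# the tokenizer used in the paper.

from dataclasses import dataclass
from typing import List, Tuple


@dataclass
class SceneObject:
    category: str                      # e.g. "cube", "sphere"
    position: Tuple[float, float, float]  # (x, y, z) in world coordinates
    size: float                        # uniform scale


def quantize(value: float, lo: float, hi: float, n_bins: int = 256) -> int:
    """Map a continuous value to one of n_bins discrete bins (assumed range [lo, hi])."""
    value = min(max(value, lo), hi)
    return int((value - lo) / (hi - lo) * (n_bins - 1))


def scene_to_tokens(objects: List[SceneObject]) -> List[str]:
    """Flatten a scene into a token sequence: one block of tokens per object."""
    tokens = ["<scene>"]
    for obj in objects:
        tokens.append(f"<obj:{obj.category}>")
        for axis, coord in zip("xyz", obj.position):
            tokens.append(f"<{axis}:{quantize(coord, -3.0, 3.0)}>")
        tokens.append(f"<size:{quantize(obj.size, 0.0, 2.0)}>")
    tokens.append("</scene>")
    return tokens


if __name__ == "__main__":
    scene = [
        SceneObject("cube", (0.5, 0.0, -1.2), 0.7),
        SceneObject("sphere", (-1.0, 0.0, 0.8), 0.4),
    ]
    print(scene_to_tokens(scene))
```

In such a setup, the resulting scene tokens could be interleaved with ordinary text tokens (for instructions or questions) and image tokens, which is the kind of unified sequence a single autoregressive model can be trained on; how the paper actually encodes shapes (e.g., its quantized shape encodings) is described in the full text, not here.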