Part-X-MLLM: Part-aware 3D Multimodal Large Language Model
November 17, 2025
Authors: Chunshi Wang, Junliang Ye, Yunhan Yang, Yang Li, Zizhuo Lin, Jun Zhu, Zhuo Chen, Yawei Luo, Chunchao Guo
cs.AI
Abstract
We introduce Part-X-MLLM, a native 3D multimodal large language model that unifies diverse 3D tasks by formulating them as programs in a structured, executable grammar. Given an RGB point cloud and a natural language prompt, our model autoregressively generates a single, coherent token sequence encoding part-level bounding boxes, semantic descriptions, and edit commands. This structured output serves as a versatile interface to drive downstream geometry-aware modules for part-based generation and editing. By decoupling the symbolic planning from the geometric synthesis, our approach allows any compatible geometry engine to be controlled through a single, language-native frontend. We pre-train a dual-encoder architecture to disentangle structure from semantics and instruction-tune the model on a large-scale, part-centric dataset. Experiments demonstrate that our model excels at producing high-quality, structured plans, enabling state-of-the-art performance in grounded Q&A, compositional generation, and localized editing through one unified interface. Project page: https://chunshi.wang/Part-X-MLLM/
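To make the abstract's central idea concrete, the sketch below illustrates the *kind* of structured, executable plan the model is described as emitting: one token sequence combining part-level bounding boxes, semantic descriptions, and edit commands. The grammar, tag names, field names, and edit operations here are illustrative assumptions, not the authors' actual tokenization or API.

```python
# Hypothetical sketch of a part-level "plan" in a structured, executable grammar.
# All tags (<plan>, <part>, <box>, <edit>) and ops are assumed for illustration;
# the paper's real token vocabulary is not specified in this abstract.
from dataclasses import dataclass

@dataclass
class PartBox:
    label: str    # semantic description of the part
    bbox: tuple   # (x_min, y_min, z_min, x_max, y_max, z_max)

@dataclass
class EditCommand:
    op: str       # e.g. "replace", "remove" (assumed operation names)
    target: str   # label of the part the edit applies to
    prompt: str   # natural-language description of the change

def render_plan(parts, edits):
    """Serialize boxes, labels, and edits into one flat token string,
    loosely mimicking a single autoregressively generated sequence."""
    tokens = ["<plan>"]
    for p in parts:
        coords = " ".join(f"{c:.2f}" for c in p.bbox)
        tokens.append(f"<part> {p.label} <box> {coords} </box> </part>")
    for e in edits:
        tokens.append(f"<edit> {e.op} {e.target} : {e.prompt} </edit>")
    tokens.append("</plan>")
    return "\n".join(tokens)

if __name__ == "__main__":
    parts = [
        PartBox("chair seat", (-0.4, -0.4, 0.0, 0.4, 0.4, 0.1)),
        PartBox("chair back", (-0.4, 0.3, 0.1, 0.4, 0.4, 0.9)),
    ]
    edits = [EditCommand("replace", "chair back", "make it a slatted backrest")]
    print(render_plan(parts, edits))
```

Under this reading, a downstream geometry engine only needs to parse such a plan, which is what lets symbolic planning stay decoupled from geometric synthesis.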