
PhysX: Physical-Grounded 3D Asset Generation

July 16, 2025
Authors: Ziang Cao, Zhaoxi Chen, Liang Pan, Ziwei Liu
cs.AI

Abstract

3D modeling is moving from virtual to physical. Existing 3D generation primarily emphasizes geometries and textures while neglecting physical-grounded modeling. Consequently, despite the rapid development of 3D generative models, the synthesized 3D assets often overlook rich and important physical properties, hampering their real-world application in physical domains like simulation and embodied AI. As an initial attempt to address this challenge, we propose PhysX, an end-to-end paradigm for physical-grounded 3D asset generation. 1) To bridge the critical gap in physics-annotated 3D datasets, we present PhysXNet, the first physics-grounded 3D dataset systematically annotated across five foundational dimensions: absolute scale, material, affordance, kinematics, and function description. In particular, we devise a scalable human-in-the-loop annotation pipeline based on vision-language models, which enables efficient creation of physics-first assets from raw 3D assets. 2) Furthermore, we propose PhysXGen, a feed-forward framework for physics-grounded image-to-3D asset generation, injecting physical knowledge into the pre-trained 3D structural space. Specifically, PhysXGen employs a dual-branch architecture to explicitly model the latent correlations between 3D structures and physical properties, thereby producing 3D assets with plausible physical predictions while preserving the native geometry quality. Extensive experiments validate the superior performance and promising generalization capability of our framework. All the code, data, and models will be released to facilitate future research in generative physical AI.
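
To make the five annotation dimensions concrete, here is a minimal sketch of what a PhysXNet-style asset record could look like. The schema, field names, units, and per-part layout are our assumptions for illustration; the abstract names the dimensions but does not specify a storage format.

```python
from dataclasses import dataclass, field
from typing import List, Tuple

@dataclass
class PartAnnotation:
    """Per-part physical annotation covering material, affordance,
    and kinematics. All field names and units are illustrative."""
    material: str                        # e.g. "anodized aluminum"
    density_kg_m3: float                 # hypothetical material property
    affordance: List[str]                # e.g. ["grasp", "rotate"]
    kinematic_type: str                  # e.g. "revolute", "prismatic", "fixed"
    kinematic_range: Tuple[float, float] # joint limits, e.g. radians

@dataclass
class PhysXNetAsset:
    """A physics-annotated 3D asset record (hypothetical schema)."""
    mesh_path: str                       # raw geometry source
    absolute_scale_m: float              # real-world extent of the asset
    function_description: str            # free-text functional summary
    parts: List[PartAnnotation] = field(default_factory=list)

# Example record for a single articulated object
lamp = PhysXNetAsset(
    mesh_path="assets/lamp.glb",
    absolute_scale_m=0.45,
    function_description="Adjustable desk lamp for task lighting.",
    parts=[PartAnnotation(
        material="anodized aluminum", density_kg_m3=2700.0,
        affordance=["grasp", "rotate"], kinematic_type="revolute",
        kinematic_range=(0.0, 2.09))],
)
print(lamp.parts[0].kinematic_type)  # revolute
```

A per-part layout like this matches the annotations named in the abstract: scale and function description live at the asset level, while material, affordance, and kinematics naturally attach to individual parts.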
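Likewise, a minimal sketch of the dual-branch idea behind PhysXGen: a frozen stand-in for the pretrained encoder feeds two heads, one decoding structure latents and one predicting physical-property latents conditioned on them, approximating the "latent correlations" the abstract describes. Module names, dimensions, and the fusion scheme are hypothetical, not the authors' implementation.

```python
import torch
import torch.nn as nn

class PhysXGenSketch(nn.Module):
    """Dual-branch sketch: shared latent -> (structure, physics).
    The physics branch also sees structure features, so physical
    predictions can depend on geometry. Sizes are illustrative."""

    def __init__(self, img_dim=768, latent_dim=512, phys_dim=64):
        super().__init__()
        # stand-in for a pretrained 3D encoder; frozen to mirror
        # "injecting physics into the pre-trained structural space"
        self.encoder = nn.Linear(img_dim, latent_dim)
        self.encoder.requires_grad_(False)
        self.struct_branch = nn.Sequential(
            nn.Linear(latent_dim, latent_dim), nn.GELU(),
            nn.Linear(latent_dim, latent_dim))
        self.phys_branch = nn.Sequential(
            nn.Linear(latent_dim, latent_dim), nn.GELU(),
            nn.Linear(latent_dim, phys_dim))
        # fusion models the structure/physics correlation explicitly
        self.fuse = nn.Linear(latent_dim + latent_dim, latent_dim)

    def forward(self, img_feat):
        z = self.encoder(img_feat)
        struct = self.struct_branch(z)            # geometry/texture latent
        phys_in = self.fuse(torch.cat([z, struct], dim=-1))
        phys = self.phys_branch(phys_in)          # physical-property latent
        return struct, phys

# smoke test with random image features
model = PhysXGenSketch()
s, p = model(torch.randn(2, 768))
print(s.shape, p.shape)  # torch.Size([2, 512]) torch.Size([2, 64])
```

Keeping the structure branch close to the pretrained latent space is what would let such a design preserve the native geometry quality while the physics branch adds scale, material, and kinematic predictions on top.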