
VoMP: Predicting Volumetric Mechanical Property Fields

October 27, 2025
Authors: Rishit Dagli, Donglai Xiang, Vismay Modi, Charles Loop, Clement Fuji Tsang, Anka He Chen, Anita Hu, Gavriel State, David I. W. Levin, Maria Shugrina
cs.AI

Abstract

Physical simulation relies on spatially-varying mechanical properties, often laboriously hand-crafted. VoMP is a feed-forward method trained to predict Young's modulus (E), Poisson's ratio (ν), and density (ρ) throughout the volume of 3D objects, in any representation that can be rendered and voxelized. VoMP aggregates per-voxel multi-view features and passes them to our trained Geometry Transformer to predict per-voxel material latent codes. These latents reside on a manifold of physically plausible materials, which we learn from a real-world dataset, guaranteeing the validity of decoded per-voxel materials. To obtain object-level training data, we propose an annotation pipeline combining knowledge from segmented 3D datasets, material databases, and a vision-language model, along with a new benchmark. Experiments show that VoMP estimates accurate volumetric properties, far outperforming prior art in accuracy and speed.
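The pipeline described above can be sketched at a high level: aggregate multi-view features into per-voxel features, then decode per-voxel latent codes into material triples (E, ν, ρ) whose ranges are valid by construction. The following is a minimal illustrative sketch, not the paper's implementation; the function names, the visibility-weighted mean, and the specific range-constraining transforms are assumptions standing in for the trained Geometry Transformer and the learned material manifold.

```python
import numpy as np

def aggregate_multiview_features(view_feats, vis):
    """Fuse per-view voxel features into one feature per voxel.

    view_feats: (V, N, F) array - F-dim features for N voxels from V views.
    vis:        (V, N) array    - visibility weight of each voxel in each view.
    Returns a (N, F) array: a visibility-weighted mean over views
    (a simple stand-in for the paper's multi-view aggregation).
    """
    w = vis[..., None]                                  # (V, N, 1)
    return (view_feats * w).sum(0) / np.clip(w.sum(0), 1e-8, None)

def decode_material(z):
    """Map per-voxel latent codes z (N, 3) to physically valid properties.

    Validity is enforced by construction (a toy analogue of decoding on a
    manifold of plausible materials): E and rho are positive via exp, and
    Poisson's ratio nu is squashed into the admissible interval (-1, 0.5).
    """
    E = np.exp(z[..., 0])                               # Young's modulus > 0
    nu = -1.0 + 1.5 / (1.0 + np.exp(-z[..., 1]))        # nu in (-1, 0.5)
    rho = np.exp(z[..., 2])                             # density > 0
    return E, nu, rho
```

In the actual method, the aggregation feeds a trained Geometry Transformer and the decoder is learned from real material data; this sketch only shows the data flow and why decoded per-voxel materials stay in valid ranges.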