PhysRig: Differentiable Physics-Based Skinning and Rigging Framework for Realistic Articulated Object Modeling
June 26, 2025
Authors: Hao Zhang, Haolan Xu, Chun Feng, Varun Jampani, Narendra Ahuja
cs.AI
Abstract
Skinning and rigging are fundamental components in animation, articulated
object reconstruction, motion transfer, and 4D generation. Existing approaches
predominantly rely on Linear Blend Skinning (LBS), due to its simplicity and
differentiability. However, LBS introduces artifacts such as volume loss and
unnatural deformations, and it fails to model elastic materials like soft
tissues, fur, and flexible appendages (e.g., elephant trunks, ears, and fatty
tissues). In this work, we propose PhysRig: a differentiable physics-based
skinning and rigging framework that overcomes these limitations by embedding
the rigid skeleton into a volumetric representation (e.g., a tetrahedral mesh),
which is simulated as a deformable soft-body structure driven by the animated
skeleton. Our method leverages continuum mechanics and discretizes the object
as particles embedded in an Eulerian background grid to ensure
differentiability with respect to both material properties and skeletal motion.
Additionally, we introduce material prototypes, significantly reducing the
learning space while maintaining high expressiveness. To evaluate our
framework, we construct a comprehensive synthetic dataset using meshes from
Objaverse, The Amazing Animals Zoo, and Mixamo, covering diverse object
categories and motion patterns. Our method consistently outperforms traditional
LBS-based approaches, generating more realistic and physically plausible
results. Furthermore, we demonstrate the applicability of our framework in the
pose transfer task, highlighting its versatility for articulated object
modeling.
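
To make the abstract's particle-grid discretization concrete, here is a minimal sketch of one PIC-style particle-to-grid-to-particle velocity transfer with quadratic B-spline weights. It is written in PyTorch purely for illustration: the point is that every step is a smooth tensor operation, so autograd can differentiate the result with respect to particle state (and, in a full solver, material properties and skeletal motion). All names are invented for this sketch; this is not PhysRig's actual solver.

```python
import torch

def p2g_g2p(x, v, mass, grid_n=32):
    """One PIC-style particle -> grid -> particle velocity transfer in 2D.

    x    : (P, 2) particle positions in (0, 1)^2, away from the boundary
    v    : (P, 2) particle velocities
    mass : (P,)   particle masses
    Returns velocities re-sampled from the Eulerian background grid.
    """
    dx = 1.0 / grid_n
    base = (x / dx - 0.5).floor().long()        # lower-left node of the 3x3 stencil
    fx = x / dx - base.float()                  # offset to that node, in cells
    # Quadratic B-spline weights over a 3-node stencil per axis, each (P, 2).
    w = [0.5 * (1.5 - fx) ** 2,
         0.75 - (fx - 1.0) ** 2,
         0.5 * (fx - 0.5) ** 2]

    grid_mv = x.new_zeros(grid_n * grid_n, 2)   # grid momentum (flattened)
    grid_m = x.new_zeros(grid_n * grid_n)       # grid mass (flattened)
    for i in range(3):                          # scatter: particles -> grid (P2G)
        for j in range(3):
            weight = w[i][:, 0] * w[j][:, 1]    # (P,)
            node = base + torch.tensor([i, j])  # (P, 2) node indices
            flat = node[:, 0] * grid_n + node[:, 1]
            grid_mv = grid_mv.index_add(0, flat, (weight * mass)[:, None] * v)
            grid_m = grid_m.index_add(0, flat, weight * mass)

    grid_v = grid_mv / grid_m.clamp_min(1e-10)[:, None]   # momentum -> velocity

    v_new = torch.zeros_like(v)
    for i in range(3):                          # gather: grid -> particles (G2P)
        for j in range(3):
            weight = (w[i][:, 0] * w[j][:, 1])[:, None]
            node = base + torch.tensor([i, j])
            flat = node[:, 0] * grid_n + node[:, 1]
            v_new = v_new + weight * grid_v[flat]
    return v_new

# Toy usage: gradients flow back to particle positions through the weights.
P = 256
x = (torch.rand(P, 2) * 0.8 + 0.1).requires_grad_(True)  # keep off the boundary
v_out = p2g_g2p(x, torch.randn(P, 2), torch.ones(P))
v_out.sum().backward()                                    # x.grad is populated
```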
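The material-prototype idea can be sketched in the same spirit: rather than learning a free material parameter per particle, each particle's material is a softmax blend of a small bank of K shared, learnable prototypes, so a differentiable simulation loss updates a handful of prototype values plus soft assignments instead of an unconstrained per-particle field. The parameterization below is a hypothetical illustration of that idea, not the paper's exact formulation; the class and variable names are invented.

```python
import torch
import torch.nn as nn

class MaterialPrototypes(nn.Module):
    """Per-particle materials as softmax blends of K learnable prototypes."""

    def __init__(self, n_particles, n_prototypes=4):
        super().__init__()
        # Prototype materials: log Young's modulus and a Poisson-ratio logit.
        self.log_E = nn.Parameter(torch.linspace(3.0, 6.0, n_prototypes))
        self.nu_logit = nn.Parameter(torch.zeros(n_prototypes))
        # Soft assignment of each particle to the prototype bank.
        self.assign = nn.Parameter(torch.zeros(n_particles, n_prototypes))

    def forward(self):
        w = torch.softmax(self.assign, dim=-1)            # (P, K) blend weights
        E = w @ torch.exp(self.log_E)                     # (P,) Young's modulus
        nu = w @ (0.49 * torch.sigmoid(self.nu_logit))    # (P,) Poisson in (0, 0.49)
        return E, nu

# Toy usage: gradients from any differentiable loss reach both the shared
# prototypes and the per-particle assignments.
protos = MaterialPrototypes(n_particles=1000)
E, nu = protos()
loss = ((E - 50.0) ** 2).mean()    # stand-in for a rendering/tracking loss
loss.backward()
```

Constraining the material field to blends of a few prototypes is one plausible way to realize the abstract's claim of a reduced learning space with high expressiveness: the optimizer searches over a low-dimensional prototype bank while the softmax still allows smooth spatial variation.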