PhysRig: Differentiable Physics-Based Skinning and Rigging Framework for Realistic Articulated Object Modeling
June 26, 2025
Authors: Hao Zhang, Haolan Xu, Chun Feng, Varun Jampani, Narendra Ahuja
cs.AI
Abstract
Skinning and rigging are fundamental components in animation, articulated
object reconstruction, motion transfer, and 4D generation. Existing approaches
predominantly rely on Linear Blend Skinning (LBS), due to its simplicity and
differentiability. However, LBS introduces artifacts such as volume loss and
unnatural deformations, and it fails to model elastic materials like soft
tissues, fur, and flexible appendages (e.g., elephant trunks, ears, and fatty
tissues). In this work, we propose PhysRig: a differentiable physics-based
skinning and rigging framework that overcomes these limitations by embedding
the rigid skeleton into a volumetric representation (e.g., a tetrahedral mesh),
which is simulated as a deformable soft-body structure driven by the animated
skeleton. Our method leverages continuum mechanics and discretizes the object
as particles embedded in an Eulerian background grid to ensure
differentiability with respect to both material properties and skeletal motion.
Additionally, we introduce material prototypes, significantly reducing the
learning space while maintaining high expressiveness. To evaluate our
framework, we construct a comprehensive synthetic dataset using meshes from
Objaverse, The Amazing Animals Zoo, and MixaMo, covering diverse object
categories and motion patterns. Our method consistently outperforms traditional
LBS-based approaches, generating more realistic and physically plausible
results. Furthermore, we demonstrate the applicability of our framework in the
pose transfer task, highlighting its versatility for articulated object
modeling.
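For context on the baseline the abstract critiques: standard Linear Blend Skinning deforms each vertex as a weighted sum of per-bone rigid transforms, and linearly interpolating rotation matrices is what produces the volume loss mentioned above. The sketch below is a minimal NumPy illustration of vanilla LBS (not PhysRig's method, and the function name and array shapes are our own choices for the example):

```python
import numpy as np

def linear_blend_skinning(rest_verts, bone_transforms, weights):
    """Vanilla LBS: v' = sum_b w_b * (T_b @ v), in homogeneous coordinates.

    rest_verts:      (V, 3) rest-pose vertex positions
    bone_transforms: (B, 4, 4) per-bone homogeneous transforms
    weights:         (V, B) skinning weights, each row summing to 1
    """
    V = rest_verts.shape[0]
    homo = np.concatenate([rest_verts, np.ones((V, 1))], axis=1)   # (V, 4)
    # Transform every vertex by every bone: (B, V, 4)
    per_bone = np.einsum('bij,vj->bvi', bone_transforms, homo)
    # Blend the transformed positions with the skinning weights: (V, 4)
    blended = np.einsum('vb,bvi->vi', weights, per_bone)
    return blended[:, :3]

# Demonstrate the classic "candy-wrapper" volume loss: a vertex weighted
# 50/50 between an identity bone and a bone rotated 90 degrees about the
# y-axis collapses toward the origin instead of staying on the unit sphere.
identity = np.eye(4)
rot_y_90 = np.eye(4)
rot_y_90[:3, :3] = [[0, 0, 1], [0, 1, 0], [-1, 0, 0]]
verts = np.array([[0.0, 0.0, 1.0]])
w = np.array([[0.5, 0.5]])
out = linear_blend_skinning(verts, np.stack([identity, rot_y_90]), w)
# The blended point (0.5, 0, 0.5) has length ~0.707 < 1: volume is lost.
```

Averaging the two transformed positions shortens the vertex's distance from the joint, which is exactly the artifact that motivates replacing LBS with the physics-based soft-body simulation described in the abstract.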