
ATLAS: Decoupling Skeletal and Shape Parameters for Expressive Parametric Human Modeling

August 21, 2025
Authors: Jinhyung Park, Javier Romero, Shunsuke Saito, Fabian Prada, Takaaki Shiratori, Yichen Xu, Federica Bogo, Shoou-I Yu, Kris Kitani, Rawal Khirodkar
cs.AI

Abstract

Parametric body models offer expressive 3D representation of humans across a wide range of poses, shapes, and facial expressions, typically derived by learning a basis over registered 3D meshes. However, existing human mesh modeling approaches struggle to capture detailed variations across diverse body poses and shapes, largely due to limited training data diversity and restrictive modeling assumptions. Moreover, the common paradigm first optimizes the external body surface using a linear basis, then regresses internal skeletal joints from surface vertices. This approach introduces problematic dependencies between the internal skeleton and outer soft tissue, limiting direct control over body height and bone lengths. To address these issues, we present ATLAS, a high-fidelity body model learned from 600k high-resolution scans captured using 240 synchronized cameras. Unlike previous methods, we explicitly decouple the shape and skeleton bases by grounding our mesh representation in the human skeleton. This decoupling enables enhanced shape expressivity, fine-grained customization of body attributes, and keypoint fitting independent of external soft-tissue characteristics. ATLAS outperforms existing methods by fitting unseen subjects in diverse poses more accurately, and quantitative evaluations show that our non-linear pose correctives more effectively capture complex poses compared to linear models.