Ani3DHuman: Photorealistic 3D Human Animation with Self-guided Stochastic Sampling
February 22, 2026
Authors: Qi Sun, Can Wang, Jiaxiang Shang, Yingchun Liu, Jing Liao
cs.AI
Abstract
Current 3D human animation methods struggle to achieve photorealism: kinematics-based approaches lack non-rigid dynamics (e.g., clothing dynamics), while methods that leverage video diffusion priors can synthesize non-rigid motion but suffer from quality artifacts and identity loss. To overcome these limitations, we present Ani3DHuman, a framework that marries kinematics-based animation with video diffusion priors. We first introduce a layered motion representation that disentangles rigid motion from residual non-rigid motion. Rigid motion is generated by a kinematic method, which produces a coarse rendering that guides the video diffusion model in generating video sequences that restore the residual non-rigid motion. However, this diffusion-sampling-based restoration task is highly challenging: the initial renderings are out-of-distribution, causing standard deterministic ODE samplers to fail. We therefore propose a novel self-guided stochastic sampling method that effectively addresses the out-of-distribution problem by combining stochastic sampling (for photorealistic quality) with self-guidance (for identity fidelity). These restored videos provide high-quality supervision, enabling the optimization of the residual non-rigid motion field. Extensive experiments demonstrate that Ani3DHuman generates photorealistic 3D human animation, outperforming existing methods. Code is available at https://github.com/qiisun/ani3dhuman.
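To make the core idea concrete, the sketch below illustrates one plausible reading of "stochastic sampling plus self-guidance" for an out-of-distribution coarse rendering. It is not the paper's implementation: the denoiser `denoise`, the noise schedule `sigmas`, and the `guidance_weight` blending of the clean estimate toward the coarse rendering `x_init` are all illustrative assumptions. The key contrast with a deterministic ODE sampler is that each step re-injects fresh noise instead of following a fixed trajectory, while the guidance term keeps the sample anchored to the input's identity.

```python
import numpy as np

def self_guided_stochastic_sample(denoise, x_init, sigmas, guidance_weight=0.1, rng=None):
    """Illustrative sketch (not the paper's code) of stochastic sampling
    with self-guidance.  `denoise(x, sigma)` is a stand-in for the video
    diffusion model's clean-frame estimator; `x_init` is the coarse
    kinematic rendering; `sigmas` is a decreasing noise schedule ending at 0.
    """
    rng = np.random.default_rng(rng)
    # Initialize from the coarse rendering perturbed to the highest noise
    # level, rather than from pure noise, so the content is retained.
    x = x_init + sigmas[0] * rng.standard_normal(x_init.shape)
    for sigma, sigma_next in zip(sigmas[:-1], sigmas[1:]):
        x0_hat = denoise(x, sigma)  # model's estimate of the clean frame
        # Self-guidance (assumed form): pull the estimate toward the coarse
        # rendering to preserve identity.
        x0_hat = x0_hat + guidance_weight * (x_init - x0_hat)
        # Stochastic update: re-noise the guided estimate to the next level
        # instead of following the deterministic ODE trajectory.
        x = x0_hat + sigma_next * rng.standard_normal(x.shape)
    return x
```

With the schedule ending at `sigma = 0`, the final iteration returns the guided clean estimate exactly; the renoising at intermediate steps is what lets the sampler escape the out-of-distribution initialization.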