

SkeletonGaussian: Editable 4D Generation through Gaussian Skeletonization

February 4, 2026
Authors: Lifan Wu, Ruijie Zhu, Yubo Ai, Tianzhu Zhang
cs.AI

Abstract

4D generation has made remarkable progress in synthesizing dynamic 3D objects from input text, images, or videos. However, existing methods often represent motion as an implicit deformation field, which limits direct control and editability. To address this issue, we propose SkeletonGaussian, a novel framework for generating editable dynamic 3D Gaussians from monocular video input. Our approach introduces a hierarchical articulated representation that decomposes motion into sparse rigid motion explicitly driven by a skeleton and fine-grained non-rigid motion. Concretely, we extract a robust skeleton and drive rigid motion via linear blend skinning, followed by a hexplane-based refinement for non-rigid deformations, enhancing interpretability and editability. Experimental results demonstrate that SkeletonGaussian surpasses existing methods in generation quality while enabling intuitive motion editing, establishing a new paradigm for editable 4D generation. Project page: https://wusar.github.io/projects/skeletongaussian/
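The abstract describes skeleton-driven rigid motion applied through linear blend skinning (LBS) before a hexplane-based refinement handles non-rigid residuals. The snippet below is a minimal sketch of the LBS step applied to Gaussian centers only; it is not the authors' implementation, and the bone transforms, skinning weights, and function names are illustrative assumptions.

```python
# Minimal sketch (not the paper's code): linear blend skinning of Gaussian centers.
# Per-bone rotations/translations and skinning weights are assumed inputs.
import numpy as np

def linear_blend_skinning(centers, weights, rotations, translations):
    """Deform canonical Gaussian centers with per-bone rigid transforms.

    centers:      (N, 3) canonical Gaussian centers
    weights:      (N, B) skinning weights, each row summing to 1
    rotations:    (B, 3, 3) per-bone rotation matrices
    translations: (B, 3) per-bone translations
    returns:      (N, 3) deformed centers
    """
    # Apply every bone's rigid transform to every center -> (B, N, 3)
    per_bone = np.einsum('bij,nj->bni', rotations, centers) + translations[:, None, :]
    # Blend the per-bone candidate positions with the skinning weights -> (N, 3)
    return np.einsum('nb,bni->ni', weights, per_bone)

# Toy usage: three Gaussians skinned to two bones, the second bone translated along x.
centers = np.random.rand(3, 3)
weights = np.array([[1.0, 0.0], [0.5, 0.5], [0.0, 1.0]])
rotations = np.stack([np.eye(3), np.eye(3)])
translations = np.array([[0.0, 0.0, 0.0], [0.1, 0.0, 0.0]])
deformed = linear_blend_skinning(centers, weights, rotations, translations)
```

In the framework described above, the output of such a skeleton-driven rigid stage would then be refined by a hexplane-based deformation module to capture fine-grained non-rigid motion.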