
ActionMesh: Animated 3D Mesh Generation with Temporal 3D Diffusion

January 22, 2026
Authors: Remy Sabathier, David Novotny, Niloy J. Mitra, Tom Monnier
cs.AI

Abstract

Generating animated 3D objects is at the heart of many applications, yet state-of-the-art methods are often difficult to apply in practice because of their restricted setups, long runtimes, or limited quality. We introduce ActionMesh, a generative model that predicts production-ready 3D meshes "in action" in a feed-forward manner. Drawing inspiration from early video models, our key insight is to extend existing 3D diffusion models with a temporal axis, resulting in a framework we dub "temporal 3D diffusion". Specifically, we first adapt the 3D diffusion stage to generate a sequence of synchronized latents representing time-varying but independent 3D shapes. Second, we design a temporal 3D autoencoder that translates this sequence of independent shapes into corresponding deformations of a pre-defined reference shape, yielding an animation. Combining these two components, ActionMesh generates animated 3D meshes from diverse inputs: a monocular video, a text description, or even a static 3D mesh paired with a text prompt describing its animation. Moreover, compared to previous approaches, our method is fast and produces rig-free, topology-consistent results, enabling rapid iteration and seamless downstream applications such as texturing and retargeting. We evaluate our model on standard video-to-4D benchmarks (Consistent4D, Objaverse) and report state-of-the-art performance in both geometric accuracy and temporal consistency, demonstrating that ActionMesh delivers animated 3D meshes with unprecedented speed and quality.
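The two-stage pipeline described above lends itself to a compact sketch. The snippet below is a minimal, hypothetical PyTorch illustration of the idea, not the authors' implementation: a denoiser that synchronizes per-frame 3D shape latents through temporal self-attention, and a temporal autoencoder that decodes the latent sequence into per-frame vertex offsets of a single reference mesh, so every frame shares one topology. All module names, tensor shapes, and layer choices are assumptions.

```python
# Hypothetical sketch of "temporal 3D diffusion"; all names, shapes,
# and layers are illustrative assumptions, not the ActionMesh code.
import torch
import torch.nn as nn


class TemporalDenoiser(nn.Module):
    """Jointly denoises a sequence of per-frame 3D shape latents.

    Spatial layers process each frame independently; temporal
    self-attention synchronizes latents across the time axis.
    """

    def __init__(self, latent_dim: int = 64, n_heads: int = 4):
        super().__init__()
        self.spatial = nn.Sequential(
            nn.Linear(latent_dim, latent_dim),
            nn.GELU(),
            nn.Linear(latent_dim, latent_dim),
        )
        self.temporal_attn = nn.MultiheadAttention(
            latent_dim, n_heads, batch_first=True
        )
        self.out = nn.Linear(latent_dim, latent_dim)

    def forward(self, z_t: torch.Tensor) -> torch.Tensor:
        # z_t: (batch, frames, tokens, dim) noisy latents for all frames.
        b, f, n, d = z_t.shape
        h = self.spatial(z_t)
        # Fold spatial tokens into the batch and attend over frames.
        h = h.permute(0, 2, 1, 3).reshape(b * n, f, d)
        h, _ = self.temporal_attn(h, h, h)
        h = h.reshape(b, n, f, d).permute(0, 2, 1, 3)
        return self.out(h)  # predicted noise, same shape as z_t


class TemporalShapeAutoencoder(nn.Module):
    """Decodes a latent sequence into per-frame vertex offsets of one
    reference mesh, so every animation frame shares its topology."""

    def __init__(self, latent_dim: int = 64, n_vertices: int = 1024):
        super().__init__()
        self.decode = nn.Linear(latent_dim, n_vertices * 3)

    def forward(self, z: torch.Tensor, ref_verts: torch.Tensor):
        # z: (batch, frames, dim); ref_verts: (batch, n_vertices, 3).
        offsets = self.decode(z).reshape(z.shape[0], z.shape[1], -1, 3)
        return ref_verts.unsqueeze(1) + offsets  # (b, frames, verts, 3)


# Usage: one denoising step, then decoding to an animated mesh.
denoiser = TemporalDenoiser()
autoenc = TemporalShapeAutoencoder()
z = torch.randn(1, 8, 16, 64)        # 8 frames, 16 latent tokens each
eps = denoiser(z)                    # jointly predicted noise
ref = torch.randn(1, 1024, 3)        # reference mesh vertices
anim = autoenc(z.mean(dim=2), ref)   # (1, 8, 1024, 3) animated frames
```

Decoding every frame as a deformation of the same reference mesh is what would make the output rig-free and topology-consistent: vertex correspondence is preserved by construction, so textures and retargeting transfer across frames.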