DreaMoving: A Human Dance Video Generation Framework based on Diffusion Models
December 8, 2023
Authors: Mengyang Feng, Jinlin Liu, Kai Yu, Yuan Yao, Zheng Hui, Xiefan Guo, Xianhui Lin, Haolan Xue, Chen Shi, Xiaowen Li, Aojie Li, Miaomiao Cui, Peiran Ren, Xuansong Xie
cs.AI
Abstract
In this paper, we present DreaMoving, a diffusion-based controllable video generation framework for producing high-quality, customized human dance videos. Specifically, given a target identity and a posture sequence, DreaMoving can generate a video of the target identity dancing anywhere, driven by that posture sequence. To this end, we propose a Video ControlNet for motion control and a Content Guider for identity preservation. The proposed model is easy to use and can be adapted to most stylized diffusion models to generate diverse results. The project page is available at https://dreamoving.github.io/dreamoving.
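
The abstract names two conditioning components: a Video ControlNet that injects the posture sequence for motion control, and a Content Guider that injects the target identity. As a rough point of reference only, the sketch below shows per-frame pose-conditioned generation with an off-the-shelf OpenPose ControlNet in Hugging Face diffusers; the model IDs, file paths, and prompt are illustrative assumptions, and this approximates neither the Video ControlNet's temporal layers nor the Content Guider described in the paper.

```python
# Illustrative sketch: per-frame pose-conditioned generation with a public
# OpenPose ControlNet (Hugging Face diffusers). This only demonstrates the
# general idea of driving a diffusion model with a posture sequence; it is
# NOT the DreaMoving implementation, which adds temporal layers (Video
# ControlNet) and identity conditioning (Content Guider).
import torch
from diffusers import StableDiffusionControlNetPipeline, ControlNetModel
from diffusers.utils import load_image

# OpenPose-conditioned ControlNet on top of Stable Diffusion 1.5.
controlnet = ControlNetModel.from_pretrained(
    "lllyasviel/sd-controlnet-openpose", torch_dtype=torch.float16
)
pipe = StableDiffusionControlNetPipeline.from_pretrained(
    "runwayml/stable-diffusion-v1-5",
    controlnet=controlnet,
    torch_dtype=torch.float16,
).to("cuda")

# Hypothetical posture sequence: one pose map per target frame.
pose_maps = [load_image(f"poses/frame_{i:04d}.png") for i in range(16)]

# Text prompt stands in for the target identity and scene ("dancing anywhere").
prompt = "a woman dancing on a beach, photorealistic"

frames = []
for pose in pose_maps:
    # Re-seed each frame so all frames start from the same initial noise,
    # a common trick to reduce flicker in per-frame ControlNet generation.
    generator = torch.Generator("cuda").manual_seed(42)
    frame = pipe(prompt, image=pose, num_inference_steps=25,
                 generator=generator).images[0]
    frames.append(frame)

# `frames` now holds per-frame results; without temporal modules they will
# still flicker, which is the gap a Video ControlNet is meant to close.
```

In this per-frame setup, identity and background are controlled only through the text prompt; a Content Guider, as described in the abstract, would instead condition generation on reference images of the target identity.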