

DreaMoving: A Human Dance Video Generation Framework based on Diffusion Models

December 8, 2023
Authors: Mengyang Feng, Jinlin Liu, Kai Yu, Yuan Yao, Zheng Hui, Xiefan Guo, Xianhui Lin, Haolan Xue, Chen Shi, Xiaowen Li, Aojie Li, Miaomiao Cui, Peiran Ren, Xuansong Xie
cs.AI

Abstract

In this paper, we present DreaMoving, a diffusion-based controllable video generation framework for producing high-quality, customized human dance videos. Specifically, given a target identity and a posture sequence, DreaMoving can generate a video of the target identity dancing anywhere, driven by the posture sequence. To this end, we propose a Video ControlNet for motion control and a Content Guider for identity preservation. The proposed model is easy to use and can be adapted to most stylized diffusion models to generate diverse results. The project page is available at https://dreamoving.github.io/dreamoving.
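The abstract names two components: a Video ControlNet for pose-driven motion control and a Content Guider for identity preservation. As a rough illustration of the underlying idea only, below is a minimal per-frame sketch built on the public diffusers ControlNet API; it is not the authors' implementation. The paper's Video ControlNet adds temporal modeling across frames and the Content Guider injects identity from reference-image features, neither of which this sketch reproduces: a text prompt stands in for identity, a fixed seed crudely stands in for temporal consistency, and the model IDs and pose-frame filenames are assumptions.

```python
# Minimal per-frame sketch of pose-conditioned generation with diffusers.
# NOT DreaMoving's Video ControlNet; it only shows the core idea of
# steering a diffusion model with a pose sequence.
import torch
from diffusers import ControlNetModel, StableDiffusionControlNetPipeline
from diffusers.utils import load_image

# An OpenPose-conditioned ControlNet biases generation toward each pose.
controlnet = ControlNetModel.from_pretrained(
    "lllyasviel/sd-controlnet-openpose", torch_dtype=torch.float16
)
pipe = StableDiffusionControlNetPipeline.from_pretrained(
    "runwayml/stable-diffusion-v1-5",
    controlnet=controlnet,
    torch_dtype=torch.float16,
).to("cuda")

# Text prompt as a stand-in for the Content Guider's identity conditioning.
prompt = "a woman dancing on a beach, photorealistic"

# Hypothetical pose-skeleton frames (e.g. rendered OpenPose keypoints).
pose_frames = [load_image(f"pose_{i:03d}.png") for i in range(16)]

# A fixed seed per frame is a crude stand-in for the temporal layers a
# video framework uses to keep appearance coherent across frames.
frames = []
for pose in pose_frames:
    generator = torch.Generator("cuda").manual_seed(42)
    frame = pipe(
        prompt, image=pose, num_inference_steps=25, generator=generator
    ).images[0]
    frames.append(frame)
```

In the actual framework, the per-frame ControlNet above would be replaced by a Video ControlNet operating on the whole pose sequence, and the text prompt by image-derived identity features, so the sketch should be read as a conceptual baseline rather than a reproduction.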