

TCAN: Animating Human Images with Temporally Consistent Pose Guidance using Diffusion Models

July 12, 2024
作者: Jeongho Kim, Min-Jung Kim, Junsoo Lee, Jaegul Choo
cs.AI

Abstract

Pose-driven human image animation diffusion models have shown remarkable capabilities in realistic human video synthesis. Despite the promising results achieved by previous approaches, challenges persist in achieving temporally consistent animation and in ensuring robustness with off-the-shelf pose detectors. In this paper, we present TCAN, a pose-driven human image animation method that is robust to erroneous poses and consistent over time. In contrast to previous methods, we utilize the pre-trained ControlNet without fine-tuning to leverage its extensive pre-acquired knowledge from numerous pose-image-caption triplets. To keep the ControlNet frozen, we apply LoRA to the UNet layers, enabling the network to align the latent space between the pose and appearance features. Additionally, by introducing an additional temporal layer to the ControlNet, we enhance robustness against outliers of the pose detector. Through the analysis of attention maps over the temporal axis, we also design a novel temperature map that leverages pose information, allowing for a more static background. Extensive experiments demonstrate that the proposed method achieves promising results in video synthesis tasks encompassing various poses, such as chibi characters. Project Page: https://eccv2024tcan.github.io/
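
Two of the abstract's ingredients lend themselves to a short illustration: keeping the pre-trained ControlNet frozen while adapting only LoRA layers inside the UNet, and temperature-scaled attention over the temporal axis. The PyTorch sketch below is illustrative only, not the authors' implementation; LoRALinear, freeze, temporal_attention, and the toy shapes are assumptions, and the paper's actual temperature map is derived from pose information rather than set uniformly as here.

    import torch
    import torch.nn as nn

    class LoRALinear(nn.Module):
        # Wraps a frozen nn.Linear with a trainable low-rank residual:
        # y = W x + (alpha / r) * B(A x). B starts at zero, so training
        # begins exactly at the pre-trained behavior.
        def __init__(self, base: nn.Linear, rank: int = 4, alpha: float = 4.0):
            super().__init__()
            self.base = base
            self.base.weight.requires_grad_(False)
            if self.base.bias is not None:
                self.base.bias.requires_grad_(False)
            self.down = nn.Linear(base.in_features, rank, bias=False)   # A
            self.up = nn.Linear(rank, base.out_features, bias=False)    # B
            nn.init.normal_(self.down.weight, std=1.0 / rank)
            nn.init.zeros_(self.up.weight)
            self.scale = alpha / rank

        def forward(self, x):
            return self.base(x) + self.scale * self.up(self.down(x))

    def freeze(module: nn.Module) -> nn.Module:
        # Freeze all parameters, e.g. the pre-trained ControlNet, so it
        # retains its knowledge from pose-image-caption training.
        for p in module.parameters():
            p.requires_grad_(False)
        return module

    def temporal_attention(q, k, v, tau):
        # Temperature-scaled attention along the frame axis. A per-query
        # temperature map tau (broadcastable to the logits) can flatten
        # attention for background queries, keeping the background static.
        logits = (q @ k.transpose(-2, -1)) / (q.shape[-1] ** 0.5)
        return torch.softmax(logits / tau, dim=-1) @ v

    # Toy usage: batch=1, 8 frames, feature dim 64.
    q = k = v = torch.randn(1, 8, 64)
    tau = torch.ones(1, 8, 1)            # uniform here; pose-derived in the paper
    out = temporal_attention(q, k, v, tau)
    lora_q = LoRALinear(nn.Linear(64, 64))  # would stand in for a UNet query projection
    y = lora_q(q)

In this setup only the down/up projections of each LoRALinear receive gradients, which matches the abstract's design choice: the ControlNet's pose knowledge stays intact while the UNet learns to align pose and appearance features.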
