
Loopy: Taming Audio-Driven Portrait Avatar with Long-Term Motion Dependency

September 4, 2024
Authors: Jianwen Jiang, Chao Liang, Jiaqi Yang, Gaojie Lin, Tianyun Zhong, Yanbo Zheng
cs.AI

Abstract

With the introduction of diffusion-based video generation techniques, audio-conditioned human video generation has recently achieved significant breakthroughs in both the naturalness of motion and the synthesis of portrait details. Due to the limited control that audio signals provide over human motion, existing methods often add auxiliary spatial signals to stabilize movements, which may compromise the naturalness and freedom of motion. In this paper, we propose an end-to-end audio-only conditioned video diffusion model named Loopy. Specifically, we design an inter- and intra-clip temporal module and an audio-to-latents module, enabling the model to leverage long-term motion information from the data to learn natural motion patterns and to improve audio-portrait movement correlation. This method removes the need for the manually specified spatial motion templates that existing methods use to constrain motion during inference. Extensive experiments show that Loopy outperforms recent audio-driven portrait diffusion models, delivering more lifelike and high-quality results across various scenarios.
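
The abstract describes an audio-to-latents module that conditions the diffusion model on audio alone. As an illustration of the general pattern such a module could follow (cross-attention from learnable motion-latent queries to projected audio features), the following is a minimal Python/PyTorch sketch. The class name, dimensions, and structure are assumptions made for illustration only, not the authors' released implementation.

    # Illustrative sketch: map per-frame audio features (e.g., from a pretrained
    # speech encoder) to motion latent tokens via cross-attention. All names and
    # dimensions below are assumptions, not Loopy's actual implementation.
    import torch
    import torch.nn as nn

    class AudioToLatents(nn.Module):
        """Cross-attend learnable latent queries over projected audio features."""

        def __init__(self, audio_dim=768, latent_dim=320, num_latents=16, num_heads=8):
            super().__init__()
            # Learnable query tokens that absorb audio information.
            self.latents = nn.Parameter(torch.randn(num_latents, latent_dim) * 0.02)
            self.audio_proj = nn.Linear(audio_dim, latent_dim)
            self.cross_attn = nn.MultiheadAttention(latent_dim, num_heads, batch_first=True)
            self.norm = nn.LayerNorm(latent_dim)

        def forward(self, audio_feats):
            # audio_feats: (batch, num_audio_frames, audio_dim)
            b = audio_feats.shape[0]
            kv = self.audio_proj(audio_feats)                 # (b, T_audio, latent_dim)
            q = self.latents.unsqueeze(0).expand(b, -1, -1)   # (b, num_latents, latent_dim)
            out, _ = self.cross_attn(q, kv, kv)               # audio-conditioned latents
            return self.norm(out + q)                         # (b, num_latents, latent_dim)

    if __name__ == "__main__":
        module = AudioToLatents()
        audio = torch.randn(2, 50, 768)   # e.g., 2 clips, 50 audio frames each
        latents = module(audio)
        print(latents.shape)              # torch.Size([2, 16, 320])

In a full pipeline, latents like these would typically be injected into the video diffusion backbone (for example through cross-attention layers) so that audio, rather than auxiliary spatial templates, drives the portrait motion.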
