Tuning-Free Noise Rectification for High Fidelity Image-to-Video Generation
March 5, 2024
Authors: Weijie Li, Litong Gong, Yiran Zhu, Fanda Fan, Biao Wang, Tiezheng Ge, Bo Zheng
cs.AI
Abstract
Image-to-video (I2V) generation consistently struggles to maintain high
fidelity in open domains. Traditional image animation techniques focus
primarily on specific domains such as faces or human poses, which makes them
hard to generalize to open domains. Several recent I2V frameworks based on
diffusion models can generate dynamic content for open-domain images but fail
to maintain fidelity. We find that the two main causes of low fidelity are the
loss of image details and noise prediction biases during the denoising
process. To this end, we propose an effective method that can be applied to
mainstream video diffusion models. It achieves high fidelity by supplementing
more precise image information and rectifying the predicted noise.
Specifically, given an input image, our method first adds noise to its latent
representation to preserve more details, then denoises the noisy latent with
proper rectification to alleviate the noise prediction biases. Our method is
tuning-free and plug-and-play. Experimental results demonstrate its
effectiveness in improving the fidelity of generated videos. For more
image-to-video generation results, please refer to the project website:
https://noise-rectification.github.io.
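The two-step recipe the abstract describes, noising the input image latent and then correcting the model's noise prediction toward the noise that was actually added, can be sketched as below. This is a minimal illustrative sketch, not the paper's exact formulation: the function names, the linear blending form, and the rectification weight `w` are all assumptions, and real implementations operate on multi-dimensional tensors rather than flat lists.

```python
def add_noise(latent, alpha_bar, noise):
    """Forward diffusion step on the image latent:
    z_t = sqrt(alpha_bar) * z_0 + sqrt(1 - alpha_bar) * eps.
    Keeping the known `noise` around is what later enables rectification."""
    a = alpha_bar ** 0.5
    b = (1.0 - alpha_bar) ** 0.5
    return [a * z + b * e for z, e in zip(latent, noise)]

def rectify(eps_pred, eps_added, w):
    """Blend the model's predicted noise toward the ground-truth noise that
    was added to the image latent, reducing prediction bias during denoising.
    `w` (assumed here to be a scalar in [0, 1]) controls how strongly the
    prediction is pulled toward the known noise."""
    return [w * ea + (1.0 - w) * ep for ea, ep in zip(eps_added, eps_pred)]
```

For example, with `w = 1.0` the denoiser would use exactly the noise that was added (perfectly recovering the image latent details), while `w = 0.0` falls back to the unmodified model prediction; intermediate weights trade fidelity to the input image against the dynamics the video model generates.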