
CODE: Confident Ordinary Differential Editing

August 22, 2024
Authors: Bastien van Delft, Tommaso Martorella, Alexandre Alahi
cs.AI

Abstract

Conditioning image generation facilitates seamless editing and the creation of photorealistic images. However, conditioning on noisy or Out-of-Distribution (OoD) images poses significant challenges, particularly in balancing fidelity to the input and realism of the output. We introduce Confident Ordinary Differential Editing (CODE), a novel approach for image synthesis that effectively handles OoD guidance images. Utilizing a diffusion model as a generative prior, CODE enhances images through score-based updates along the probability-flow Ordinary Differential Equation (ODE) trajectory. This method requires no task-specific training, no handcrafted modules, and no assumptions regarding the corruptions affecting the conditioning image. Our method is compatible with any diffusion model. Positioned at the intersection of conditional image generation and blind image restoration, CODE operates in a fully blind manner, relying solely on a pre-trained generative model. Our method introduces an alternative approach to blind restoration: instead of targeting a specific ground truth image based on assumptions about the underlying corruption, CODE aims to increase the likelihood of the input image while maintaining fidelity. This results in the most probable in-distribution image around the input. Our contributions are twofold. First, CODE introduces a novel editing method based on ODE, providing enhanced control, realism, and fidelity compared to its SDE-based counterpart. Second, we introduce a confidence interval-based clipping method, which improves CODE's effectiveness by allowing it to disregard certain pixels or information, thus enhancing the restoration process in a blind manner. Experimental results demonstrate CODE's effectiveness over existing methods, particularly in scenarios involving severe degradation or OoD inputs.
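The two ingredients described above, score-based updates along the probability-flow ODE and confidence interval-based clipping, can be illustrated with a rough sketch. This is a minimal toy example, not the authors' implementation: it assumes a VP-type probability-flow ODE with a plain Euler integrator, substitutes a closed-form Gaussian score for a pretrained diffusion model, and applies the clipping to every iterate; the names `toy_score`, `confidence_clip`, `pf_ode_edit`, and all schedules and statistics are hypothetical.

```python
import torch

def toy_score(x: torch.Tensor, t: float) -> torch.Tensor:
    # Stand-in for a pretrained diffusion model's score network s_theta(x, t);
    # for a standard Gaussian prior the exact score is simply -x.
    return -x

def confidence_clip(x: torch.Tensor, mean: float = 0.0, std: float = 1.0,
                    z: float = 1.96) -> torch.Tensor:
    # Confidence interval-based clipping (hypothetical form): constrain each pixel
    # to a z-sigma band around a reference statistic so that extreme, likely
    # corrupted values are ignored during editing.
    return torch.clamp(x, min=mean - z * std, max=mean + z * std)

def pf_ode_edit(x: torch.Tensor, score_fn, n_steps: int = 50,
                t_start: float = 0.5, beta: float = 1.0) -> torch.Tensor:
    # Euler integration of a VP-type probability-flow ODE,
    #   dx/dt = -0.5 * beta(t) * (x + s_theta(x, t)),
    # from an intermediate time t_start back to t = 0, nudging the guidance image
    # toward a higher-likelihood (in-distribution) sample under the prior.
    ts = torch.linspace(t_start, 0.0, n_steps + 1)
    for i in range(n_steps):
        t, dt = ts[i].item(), (ts[i + 1] - ts[i]).item()  # dt < 0: reverse-time step
        drift = -0.5 * beta * (x + score_fn(x, t))
        x = x + drift * dt
        x = confidence_clip(x)  # assumption: clip each iterate, not only the input
    return x

if __name__ == "__main__":
    guide = 3.0 * torch.randn(1, 3, 8, 8)   # a deliberately out-of-distribution "image"
    edited = pf_ode_edit(confidence_clip(guide), toy_score)
    print(edited.shape, float(edited.abs().mean()))
```

In practice `toy_score` would be replaced by the score (or rescaled noise prediction) of an actual pretrained diffusion model, and the clipping statistics and noise level t_start would be chosen per task; the sketch only shows how the ODE drift and the confidence band interact.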
