CODE: Confident Ordinary Differential Editing

August 22, 2024
Authors: Bastien van Delft, Tommaso Martorella, Alexandre Alahi
cs.AI

Abstract

Conditioning image generation facilitates seamless editing and the creation of photorealistic images. However, conditioning on noisy or Out-of-Distribution (OoD) images poses significant challenges, particularly in balancing fidelity to the input and realism of the output. We introduce Confident Ordinary Differential Editing (CODE), a novel approach for image synthesis that effectively handles OoD guidance images. Utilizing a diffusion model as a generative prior, CODE enhances images through score-based updates along the probability-flow Ordinary Differential Equation (ODE) trajectory. This method requires no task-specific training, no handcrafted modules, and no assumptions regarding the corruptions affecting the conditioning image. Our method is compatible with any diffusion model. Positioned at the intersection of conditional image generation and blind image restoration, CODE operates in a fully blind manner, relying solely on a pre-trained generative model. Our method introduces an alternative approach to blind restoration: instead of targeting a specific ground truth image based on assumptions about the underlying corruption, CODE aims to increase the likelihood of the input image while maintaining fidelity. This results in the most probable in-distribution image around the input. Our contributions are twofold. First, CODE introduces a novel editing method based on ODE, providing enhanced control, realism, and fidelity compared to its SDE-based counterpart. Second, we introduce a confidence interval-based clipping method, which improves CODE's effectiveness by allowing it to disregard certain pixels or information, thus enhancing the restoration process in a blind manner. Experimental results demonstrate CODE's effectiveness over existing methods, particularly in scenarios involving severe degradation or OoD inputs.
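As a rough illustration of the editing mechanism described in the abstract, the sketch below encodes a guidance point with the VP probability-flow ODE, applies a few score-based updates at a chosen noise level, and decodes back. Everything in it is an assumption made for the sake of a self-contained example: a 2-D Gaussian data prior with an analytic score stands in for a pretrained diffusion model, the noise schedule is constant, the integrator is plain Euler, and the names `edit`, `t_edit`, and `n_score_updates` are hypothetical rather than taken from the paper.

```python
# Minimal sketch of probability-flow ODE editing with a toy analytic score.
# Assumptions (not from the paper): the data prior is a 2-D Gaussian
# N(DATA_MEAN, DATA_STD**2 * I), the diffusion is VP with a constant beta,
# and integration is plain Euler. A real CODE-style setup would replace
# `score` with a pretrained diffusion model's score network.
import numpy as np

BETA = 2.0                                   # constant noise schedule beta(t) = BETA
DATA_MEAN = np.array([1.0, -1.0])
DATA_STD = 0.3

def alpha(t):
    """Signal scaling alpha(t) = exp(-0.5 * BETA * t) for a constant schedule."""
    return np.exp(-0.5 * BETA * t)

def score(x, t):
    """Analytic score of p_t(x) when p_0 is Gaussian (stand-in for a score net)."""
    a = alpha(t)
    var = (a * DATA_STD) ** 2 + (1.0 - a ** 2)   # marginal variance at time t
    return -(x - a * DATA_MEAN) / var

def pf_ode_step(x, t, dt):
    """One Euler step of the VP probability-flow ODE:
    dx/dt = -0.5 * beta(t) * (x + score(x, t))."""
    return x + dt * (-0.5 * BETA * (x + score(x, t)))

def edit(x0, t_edit=0.5, n_steps=200, n_score_updates=20, lr=0.05):
    """Encode x0 to noise level t_edit with the ODE, nudge the latent along
    the score to raise its likelihood under p_t, then decode back to t=0."""
    dt = t_edit / n_steps
    x = x0.copy()
    for i in range(n_steps):                 # encode: t = 0 -> t_edit
        x = pf_ode_step(x, i * dt, dt)
    for _ in range(n_score_updates):         # score-based updates at t_edit
        x = x + lr * score(x, t_edit)
    for i in range(n_steps, 0, -1):          # decode: t_edit -> 0
        x = pf_ode_step(x, i * dt, -dt)
    return x

if __name__ == "__main__":
    ood_input = np.array([4.0, 3.0])         # far from the toy data prior
    print("edited:", edit(ood_input))        # pulled toward the toy data mode
```

Because the ODE trajectory is deterministic and invertible, the only change to the input comes from the score-based updates at the intermediate noise level, which is what gives the ODE variant tighter control over fidelity than a stochastic (SDE) round trip.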

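The confidence interval-based clipping contribution can be sketched in a similarly hedged way. The rule below is one possible formulation, not the paper's exact method: it treats an encoded latent as approximately standard normal and clamps each coordinate into a symmetric interval containing a chosen fraction of that distribution's mass, so that extreme values, which likely carry corruption or out-of-distribution information, are discarded before decoding. The function `confidence_clip` and its parameters are hypothetical.

```python
# Illustrative sketch of confidence-interval-based clipping (an assumed
# formulation for demonstration, not the paper's exact rule): coordinates
# of a diffusion latent that fall outside a chosen confidence interval of
# the standard normal are clamped to its boundary, discarding the extreme
# information they carry.
import numpy as np
from scipy.stats import norm

def confidence_clip(latent, confidence=0.995):
    """Clamp each latent coordinate to the [-z, z] interval that contains
    `confidence` of the mass of a standard normal distribution."""
    z = norm.ppf(0.5 + confidence / 2.0)     # e.g. ~2.81 for 99.5%
    return np.clip(latent, -z, z)

# Example: extreme coordinates (likely carrying corruption) are truncated,
# while in-range coordinates pass through unchanged.
latent = np.array([0.3, -1.2, 6.0, -4.5])
print(confidence_clip(latent))               # -> [ 0.3, -1.2,  2.81, -2.81] approx.
```

In a CODE-style pipeline such a step could be applied to the ODE-encoded latent before the score updates and decoding sketched earlier, but the exact placement and interval construction would follow the paper rather than this toy.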