Inverse-and-Edit: Effective and Fast Image Editing by Cycle Consistency Models
June 23, 2025
Authors: Ilia Beletskii, Andrey Kuznetsov, Aibek Alanov
cs.AI
Abstract
Recent advances in image editing with diffusion models have achieved
impressive results, offering fine-grained control over the generation process.
However, these methods are computationally intensive because of their iterative
nature. While distilled diffusion models enable faster inference, their editing
capabilities remain limited, primarily because of poor inversion quality.
High-fidelity inversion and reconstruction are essential for precise image
editing, as they preserve the structural and semantic integrity of the source
image. In this work, we propose a novel framework that enhances image inversion
using consistency models, enabling high-quality editing in just four steps. Our
method introduces a cycle-consistency optimization strategy that significantly
improves reconstruction accuracy and enables a controllable trade-off between
editability and content preservation. We achieve state-of-the-art performance
across various image editing tasks and datasets, demonstrating that our method
matches or surpasses full-step diffusion models while being substantially more
efficient. Our code is available on GitHub at
https://github.com/ControlGenAI/Inverse-and-Edit.