SpotEdit: Evaluating Visually-Guided Image Editing Methods

August 25, 2025
Authors: Sara Ghazanfari, Wei-An Lin, Haitong Tian, Ersin Yumer
cs.AI

Abstract

Visually-guided image editing, where edits are conditioned on both visual cues and textual prompts, has emerged as a powerful paradigm for fine-grained, controllable content generation. Although recent generative models have shown remarkable capabilities, existing evaluations remain simple and insufficiently representative of real-world editing challenges. We present SpotEdit, a comprehensive benchmark designed to systematically assess visually-guided image editing methods across diverse diffusion, autoregressive, and hybrid generative models, uncovering substantial performance disparities. To address a critical yet underexplored challenge, our benchmark includes a dedicated component on hallucination, highlighting how leading models, such as GPT-4o, often hallucinate the existence of a visual cue and erroneously perform the editing task. Our code and benchmark are publicly released at https://github.com/SaraGhazanfari/SpotEdit.
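To make the hallucination evaluation concrete, the sketch below shows one way such a check could be scored: an editor receives a reference image carrying the visual cue, an input image, and a text prompt, and should abstain when the cue is absent from the input. This is a minimal, hypothetical illustration, not the SpotEdit codebase's actual API; the names (`EditSample`, `evaluate_hallucination`, the `editor` callable) are assumptions, and the real harness lives in the repository linked above.

```python
from dataclasses import dataclass
from typing import Callable, List, Optional

@dataclass
class EditSample:
    """One benchmark item (hypothetical schema)."""
    reference_image: str   # image providing the visual cue
    input_image: str       # image to be edited
    prompt: str            # textual editing instruction
    cue_present: bool      # ground truth: does the cue appear in input_image?

def evaluate_hallucination(
    samples: List[EditSample],
    editor: Callable[[str, str, str], Optional[str]],
) -> float:
    """Fraction of cue-absent samples where the editor produced an edit
    instead of abstaining (returning None). Lower is better."""
    negatives = [s for s in samples if not s.cue_present]
    if not negatives:
        return 0.0
    hallucinated = sum(
        editor(s.reference_image, s.input_image, s.prompt) is not None
        for s in negatives
    )
    return hallucinated / len(negatives)

# A trivial editor that always edits would score 1.0 (hallucinates on
# every cue-absent sample):
always_edit = lambda ref, img, prompt: "edited.png"
```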