ImagenHub: Standardizing the evaluation of conditional image generation models
October 2, 2023
Authors: Max Ku, Tianle Li, Kai Zhang, Yujie Lu, Xingyu Fu, Wenwen Zhuang, Wenhu Chen
cs.AI
Abstract
Recently, a myriad of conditional image generation and editing models have been developed to serve different downstream tasks, including text-to-image generation, text-guided image editing, subject-driven image generation, control-guided image generation, etc. However, we observe huge inconsistencies in experimental conditions: differences in datasets, inference procedures, and evaluation metrics render fair comparisons difficult. This paper proposes ImagenHub, a one-stop library to standardize the inference and evaluation of all conditional image generation models. Firstly, we define seven prominent tasks and curate high-quality evaluation datasets for them. Secondly, we build a unified inference pipeline to ensure fair comparison. Thirdly, we design two human evaluation scores, i.e., Semantic Consistency and Perceptual Quality, along with comprehensive guidelines for evaluating generated images. We train expert raters to evaluate the model outputs based on the proposed metrics. Our human evaluation achieves high inter-worker agreement, with a Krippendorff's alpha above 0.4 on 76% of the models. We comprehensively evaluated a total of around 30 models and observed three key takeaways: (1) the existing models' performance is generally unsatisfying except for Text-guided Image Generation and Subject-driven Image Generation, with 74% of models achieving an overall score lower than 0.5; (2) when we examined the claims from published papers, we found that 83% of them hold, with a few exceptions; (3) none of the existing automatic metrics achieves a Spearman's correlation higher than 0.2, except on Subject-driven Image Generation. Moving forward, we will continue our efforts to evaluate newly published models and update our leaderboard to keep track of the progress in conditional image generation.
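
To make the unified inference pipeline mentioned in the abstract concrete, here is a minimal sketch of how such a pipeline can be structured. The names below (ConditionalGenerator, run_benchmark) are hypothetical and invented for illustration; they are not ImagenHub's actual API. The point is the design: every model is wrapped behind one common interface and run on the same curated inputs with a fixed seed, so outputs differ only because the models differ.

```python
# A hypothetical sketch of a unified inference pipeline; the class and
# function names are invented for illustration, not ImagenHub's API.
from abc import ABC, abstractmethod
from typing import Optional
from PIL import Image


class ConditionalGenerator(ABC):
    """Common interface so every model is invoked the same way."""

    @abstractmethod
    def generate(self, prompt: str, source: Optional[Image.Image] = None,
                 seed: int = 42) -> Image.Image:
        """Generate one image from a prompt (and an optional source image
        for editing-style tasks), with a caller-controlled seed."""


def run_benchmark(models: dict[str, ConditionalGenerator],
                  dataset: list[dict]) -> dict[str, list[Image.Image]]:
    """Run every model on the same curated inputs with a fixed seed,
    so differences in outputs reflect the models, not the setup."""
    outputs: dict[str, list[Image.Image]] = {name: [] for name in models}
    for example in dataset:
        for name, model in models.items():
            outputs[name].append(
                model.generate(prompt=example["prompt"],
                               source=example.get("source_image"),
                               seed=42)
            )
    return outputs
```

Fixing the seed and the evaluation dataset per task is what allows the same human-evaluation guidelines to be applied uniformly across roughly 30 models.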
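
The two statistics the abstract reports can be reproduced in a few lines: Krippendorff's alpha for inter-rater agreement and Spearman's correlation between an automatic metric and human scores. The sketch below uses the krippendorff and scipy packages; the rating arrays are placeholder values, and the {0, 0.5, 1} ordinal scale is an assumption made for illustration, not a detail taken from the abstract.

```python
# A minimal sketch of the two statistics named in the abstract.
# The rating data is a placeholder; the {0, 0.5, 1} scale is assumed.
import numpy as np
import krippendorff                 # pip install krippendorff
from scipy.stats import spearmanr   # pip install scipy

# Hypothetical ratings: 3 raters x 6 generated images.
ratings = np.array([
    [1.0, 0.5, 0.0, 1.0, 0.5, 1.0],
    [1.0, 0.5, 0.0, 0.5, 0.5, 1.0],
    [1.0, 0.0, 0.0, 1.0, 0.5, 1.0],
])

# Inter-rater agreement; the paper reports alpha above 0.4 on 76% of models.
alpha = krippendorff.alpha(reliability_data=ratings,
                           level_of_measurement="ordinal")
print(f"Krippendorff's alpha: {alpha:.3f}")

# Rank correlation between an automatic metric and the mean human score;
# the paper finds rho below 0.2 for all tasks except Subject-driven
# Image Generation.
human_mean = ratings.mean(axis=0)
auto_scores = np.array([0.82, 0.47, 0.35, 0.70, 0.55, 0.90])  # placeholder
rho, pval = spearmanr(auto_scores, human_mean)
print(f"Spearman's rho: {rho:.3f} (p = {pval:.3f})")
```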