Describing Differences in Image Sets with Natural Language

December 5, 2023
Authors: Lisa Dunlap, Yuhui Zhang, Xiaohan Wang, Ruiqi Zhong, Trevor Darrell, Jacob Steinhardt, Joseph E. Gonzalez, Serena Yeung-Levy
cs.AI

Abstract

How do two sets of images differ? Discerning set-level differences is crucial for understanding model behaviors and analyzing datasets, yet manually sifting through thousands of images is impractical. To aid in this discovery process, we explore the task of automatically describing the differences between two sets of images, which we term Set Difference Captioning. This task takes in image sets D_A and D_B, and outputs a description that is more often true on D_A than D_B. We outline a two-stage approach that first proposes candidate difference descriptions from image sets and then re-ranks the candidates by checking how well they can differentiate the two sets. We introduce VisDiff, which first captions the images and prompts a language model to propose candidate descriptions, then re-ranks these descriptions using CLIP. To evaluate VisDiff, we collect VisDiffBench, a dataset with 187 paired image sets with ground truth difference descriptions. We apply VisDiff to various domains, such as comparing datasets (e.g., ImageNet vs. ImageNetV2), comparing classification models (e.g., zero-shot CLIP vs. supervised ResNet), summarizing model failure modes (supervised ResNet), characterizing differences between generative models (e.g., StableDiffusionV1 and V2), and discovering what makes images memorable. Using VisDiff, we are able to find interesting and previously unknown differences in datasets and models, demonstrating its utility in revealing nuanced insights.
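
The abstract outlines a propose-then-re-rank pipeline: caption the images and prompt a language model to propose candidate difference descriptions, then re-rank the candidates by how well they separate the two sets. The sketch below illustrates only the re-ranking idea using the Hugging Face CLIP interface; the helper names and the mean-similarity-gap score are illustrative assumptions, not the authors' implementation.

```python
# A minimal sketch of a CLIP-based re-ranking stage, assuming candidate
# descriptions were already proposed by a language model in the first stage.
# The scoring rule (mean similarity gap between sets) is an illustrative
# simplification, not the exact metric used in the paper.
from PIL import Image
import torch
from transformers import CLIPModel, CLIPProcessor

model = CLIPModel.from_pretrained("openai/clip-vit-base-patch32")
processor = CLIPProcessor.from_pretrained("openai/clip-vit-base-patch32")

def clip_scores(description: str, images: list[Image.Image]) -> torch.Tensor:
    """Cosine similarity between one candidate description and each image."""
    inputs = processor(text=[description], images=images,
                       return_tensors="pt", padding=True)
    with torch.no_grad():
        out = model(**inputs)
    # image_embeds and text_embeds are returned L2-normalized by CLIPModel,
    # so the dot product is the cosine similarity.
    return (out.image_embeds @ out.text_embeds.T).squeeze(-1)  # (num_images,)

def rank_candidates(candidates, set_a, set_b):
    """Re-rank candidates by how much more they apply to set_a than set_b."""
    scored = []
    for desc in candidates:
        gap = clip_scores(desc, set_a).mean() - clip_scores(desc, set_b).mean()
        scored.append((gap.item(), desc))
    return sorted(scored, reverse=True)  # largest gap = most D_A-specific

# Hypothetical usage: candidates come from captioning D_A/D_B and prompting an
# LLM; the top-ranked strings serve as the set difference captions.
# ranked = rank_candidates(["snowy scenes", "people at the beach"], images_a, images_b)
```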