JourneyDB: A Benchmark for Generative Image Understanding
July 3, 2023
Authors: Junting Pan, Keqiang Sun, Yuying Ge, Hao Li, Haodong Duan, Xiaoshi Wu, Renrui Zhang, Aojun Zhou, Zipeng Qin, Yi Wang, Jifeng Dai, Yu Qiao, Hongsheng Li
cs.AI
Abstract
While recent advancements in vision-language models have revolutionized multi-modal understanding, it remains unclear whether they possess the capability to comprehend generated images. Compared to real data, synthetic images exhibit a higher degree of diversity in both content and style, which poses significant difficulties for models to fully apprehend. To this end, we present JourneyDB, a large-scale dataset for multi-modal visual understanding of generated images. Our curated dataset covers 4 million diverse and high-quality generated images paired with the text prompts used to produce them. We further design 4 benchmarks to quantify the performance of generated image understanding in terms of both content and style interpretation. These benchmarks include prompt inversion, style retrieval, image captioning, and visual question answering. Lastly, we assess the performance of current state-of-the-art multi-modal models when applied to JourneyDB, and provide an in-depth analysis of their strengths and limitations in generated content understanding. We hope the proposed dataset and benchmarks will facilitate research in the field of generative content understanding. The dataset will be available at https://journeydb.github.io.
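
To make the relationship between the dataset and the four benchmarks concrete, the minimal sketch below shows how a single image-prompt sample might be organized and which field each task would target. The field names, class name, and example values are illustrative assumptions for exposition only, not the dataset's actual schema or API.

```python
# Hypothetical sketch of one JourneyDB-style annotation record.
# All field names and example values are assumptions, not the released format.
from dataclasses import dataclass, field
from typing import Dict, List


@dataclass
class GeneratedImageSample:
    image_path: str                                   # generated image file
    prompt: str                                       # text prompt used to produce the image
    style_keywords: List[str] = field(default_factory=list)   # style descriptors, e.g. "oil painting"
    caption: str = ""                                 # content-level description
    qa_pairs: List[Dict[str, str]] = field(default_factory=list)  # question/answer annotations


# Illustrative (made-up) sample showing how the four benchmarks map onto the fields:
#   prompt inversion  -> predict `prompt` from the image
#   style retrieval   -> retrieve `style_keywords` given the image
#   image captioning  -> generate `caption` for the image
#   visual QA         -> answer each entry in `qa_pairs` from the image
sample = GeneratedImageSample(
    image_path="images/000001.jpg",
    prompt="a castle floating above the clouds, oil painting, golden hour",
    style_keywords=["oil painting", "golden hour"],
    caption="A castle drifts above a sea of clouds at sunset.",
    qa_pairs=[{"question": "What is floating above the clouds?", "answer": "A castle"}],
)
```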