NL-Eye: Abductive NLI for Images
October 3, 2024
Authors: Mor Ventura, Michael Toker, Nitay Calderon, Zorik Gekhman, Yonatan Bitton, Roi Reichart
cs.AI
Abstract
Will a Visual Language Model (VLM)-based bot warn us about slipping if it
detects a wet floor? Recent VLMs have demonstrated impressive capabilities, yet
their ability to infer outcomes and causes remains underexplored. To address
this, we introduce NL-Eye, a benchmark designed to assess VLMs' visual
abductive reasoning skills. NL-Eye adapts the abductive Natural Language
Inference (NLI) task to the visual domain, requiring models to evaluate the
plausibility of hypothesis images based on a premise image and explain their
decisions. NL-Eye consists of 350 carefully curated triplet examples (1,050
images) spanning diverse reasoning categories: physical, functional, logical,
emotional, cultural, and social. The data curation process involved two steps -
writing textual descriptions and generating images using text-to-image models,
both requiring substantial human involvement to ensure high-quality and
challenging scenes. Our experiments show that VLMs struggle significantly on
NL-Eye, often performing at random baseline levels, while humans excel in both
plausibility prediction and explanation quality. This demonstrates a deficiency
in the abductive reasoning capabilities of modern VLMs. NL-Eye represents a
crucial step toward developing VLMs capable of robust multimodal reasoning for
real-world applications, including accident-prevention bots and generated video
verification.