Multimodal Needle in a Haystack: Benchmarking Long-Context Capability of Multimodal Large Language Models

June 17, 2024
作者: Hengyi Wang, Haizhou Shi, Shiwei Tan, Weiyi Qin, Wenyuan Wang, Tunyu Zhang, Akshay Nambi, Tanuja Ganu, Hao Wang
cs.AI

Abstract

Multimodal Large Language Models (MLLMs) have shown significant promise in various applications, leading to broad interest from researchers and practitioners alike. However, a comprehensive evaluation of their long-context capabilities remains underexplored. To address this gap, we introduce the MultiModal Needle-in-a-haystack (MMNeedle) benchmark, specifically designed to assess the long-context capabilities of MLLMs. Besides multi-image input, we employ image stitching to further increase the input context length, and develop a protocol to automatically generate labels for sub-image level retrieval. Essentially, MMNeedle evaluates MLLMs by stress-testing their capability to locate a target sub-image (needle) within a set of images (haystack) based on textual instructions and descriptions of image contents. This setup necessitates an advanced understanding of extensive visual contexts and effective information retrieval within long-context image inputs. With this benchmark, we evaluate state-of-the-art MLLMs, encompassing both API-based and open-source models. The findings reveal that GPT-4o consistently surpasses other models in long-context scenarios, but suffers from hallucination problems in negative samples, i.e., when needles are not in the haystacks. Our comprehensive long-context evaluation of MLLMs also sheds light on the considerable performance gap between API-based and open-source models. All the code, data, and instructions required to reproduce the main results are available at https://github.com/Wang-ML-Lab/multimodal-needle-in-a-haystack.
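To make the setup concrete, below is a minimal sketch of the stitching-and-labeling idea the abstract describes: sub-images are pasted into N x N grids to form the haystack, and the needle's label is recorded at insertion time. The function names (`stitch_grid`, `make_sample`), the Pillow-based implementation, and the cell size are illustrative assumptions, not the benchmark's released code; see the repository linked above for the actual implementation.

```python
# A minimal sketch of image stitching with automatic sub-image labels.
# All names and layout choices here are assumptions for illustration.
import random
from PIL import Image

def stitch_grid(sub_images, n, cell_size=(256, 256)):
    """Paste n*n sub-images into a single n x n stitched image."""
    w, h = cell_size
    canvas = Image.new("RGB", (n * w, n * h))
    for idx, img in enumerate(sub_images):
        row, col = divmod(idx, n)  # row-major cell layout
        canvas.paste(img.resize(cell_size), (col * w, row * h))
    return canvas

def make_sample(pool, needle, num_images=10, n=4):
    """Build a haystack of stitched images plus the needle's retrieval label.

    Returns (stitched_images, label) with label = (image_index, row, col),
    or label = None for a negative sample where no needle is inserted.
    """
    cells = [[random.choice(pool) for _ in range(n * n)]
             for _ in range(num_images)]
    label = None
    if needle is not None:
        img_idx = random.randrange(num_images)
        cell_idx = random.randrange(n * n)
        cells[img_idx][cell_idx] = needle
        label = (img_idx, *divmod(cell_idx, n))  # (image, row, col)
    stitched = [stitch_grid(c, n) for c in cells]
    return stitched, label
```

Because the needle's cell is known at construction time, the (image, row, column) label falls out automatically, which is what lets such a protocol scale context length (more images, denser grids) without any manual annotation.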
