
**MERRIN: A Benchmark for Multimodal Evidence Retrieval and Reasoning in Noisy Web Environments**

April 15, 2026
Authors: Han Wang, David Wan, Hyunji Lee, Thinh Pham, Mikaela Cankosyan, Weiyuan Chen, Elias Stengel-Eskin, Tu Vu, Mohit Bansal
cs.AI

Abstract

Motivated by the underspecified, multi-hop nature of search queries and the multimodal, heterogeneous, and often conflicting nature of real-world web results, we introduce MERRIN (Multimodal Evidence Retrieval and Reasoning in Noisy Web Environments), a human-annotated benchmark for evaluating search-augmented agents. MERRIN measures AI agents' ability to identify relevant modalities, retrieve multimodal evidence, and perform multi-hop reasoning over noisy web sources. It differs from prior work in three important aspects: (1) using natural language queries without explicit modality cues, (2) incorporating underexplored modalities such as video and audio, and (3) requiring the retrieval of complex, often noisy or conflicting multimodal evidence during web search. We evaluate diverse search agents powered by ten models, including strong closed-source models (e.g., GPT-5.4-mini, Gemini 3/3.1 Flash/Pro) and open-weight models (Qwen3-4B/30B/235B), across three search settings (no search, native search, and agentic search). Our results show that MERRIN is highly challenging: the average accuracy across all agents is 22.3%, with the best-performing agent reaching only 40.1%. We further observe that while stronger agents like Gemini Deep Research achieve higher performance, gains are modest due to over-exploration; they take more steps and use more tools, but are often distracted by conflicting or partially relevant web content, leading to incorrect answers. Compared to humans, these agents consume more resources yet achieve lower accuracy, largely due to inefficient source selection and an overreliance on text modalities. These findings highlight the need for search agents capable of robust search and reasoning across diverse modalities in noisy web environments, making MERRIN a valuable testbed for evaluating such capabilities.