
MCP-Universe: Benchmarking Large Language Models with Real-World Model Context Protocol Servers

August 20, 2025
Authors: Ziyang Luo, Zhiqi Shen, Wenzhuo Yang, Zirui Zhao, Prathyusha Jwalapuram, Amrita Saha, Doyen Sahoo, Silvio Savarese, Caiming Xiong, Junnan Li
cs.AI

Abstract

The Model Context Protocol has emerged as a transformative standard for connecting large language models to external data sources and tools, rapidly gaining adoption across major AI providers and development platforms. However, existing benchmarks are overly simplistic and fail to capture real application challenges such as long-horizon reasoning and large, unfamiliar tool spaces. To address this critical gap, we introduce MCP-Universe, the first comprehensive benchmark specifically designed to evaluate LLMs in realistic and hard tasks through interaction with real-world MCP servers. Our benchmark encompasses 6 core domains spanning 11 different MCP servers: Location Navigation, Repository Management, Financial Analysis, 3D Design, Browser Automation, and Web Searching. To ensure rigorous evaluation, we implement execution-based evaluators, including format evaluators for agent format compliance, static evaluators for time-invariant content matching, and dynamic evaluators that automatically retrieve real-time ground truth for temporally sensitive tasks. Through extensive evaluation of leading LLMs, we find that even SOTA models such as GPT-5 (43.72%), Grok-4 (33.33%) and Claude-4.0-Sonnet (29.44%) exhibit significant performance limitations. In addition, our benchmark poses a significant long-context challenge for LLM agents, as the number of input tokens increases rapidly with the number of interaction steps. Moreover, it introduces an unknown-tools challenge, as LLM agents often lack familiarity with the precise usage of the MCP servers. Notably, enterprise-level agents like Cursor cannot achieve better performance than standard ReAct frameworks. Beyond evaluation, we open-source our extensible evaluation framework with UI support, enabling researchers and practitioners to seamlessly integrate new agents and MCP servers while fostering innovation in the rapidly evolving MCP ecosystem.
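To make the evaluator design concrete, the sketch below illustrates how the three execution-based evaluator types described in the abstract (format, static, and dynamic) could be composed to score an agent's answer. This is a minimal illustration under assumed interfaces; the class names, signatures, and the `evaluate_task` helper are hypothetical and do not represent the actual MCP-Universe API.

```python
# Hypothetical sketch of the three evaluator types described in the abstract.
# Names and interfaces are illustrative, not the actual MCP-Universe API.
from abc import ABC, abstractmethod
from typing import Callable
import json


class Evaluator(ABC):
    @abstractmethod
    def score(self, agent_output: str) -> bool:
        """Return True if the agent output passes this check."""


class FormatEvaluator(Evaluator):
    """Checks format compliance, e.g. that the answer is valid JSON with required keys."""

    def __init__(self, required_keys: list[str]):
        self.required_keys = required_keys

    def score(self, agent_output: str) -> bool:
        try:
            payload = json.loads(agent_output)
        except json.JSONDecodeError:
            return False
        return all(key in payload for key in self.required_keys)


class StaticEvaluator(Evaluator):
    """Matches time-invariant content against a fixed ground-truth answer."""

    def __init__(self, expected: str):
        self.expected = expected.strip().lower()

    def score(self, agent_output: str) -> bool:
        return self.expected in agent_output.strip().lower()


class DynamicEvaluator(Evaluator):
    """Retrieves real-time ground truth at evaluation time for temporally sensitive tasks."""

    def __init__(
        self,
        fetch_ground_truth: Callable[[], str],       # e.g. a call to a live API
        compare: Callable[[str, str], bool],          # task-specific comparison
    ):
        self.fetch_ground_truth = fetch_ground_truth
        self.compare = compare

    def score(self, agent_output: str) -> bool:
        return self.compare(agent_output, self.fetch_ground_truth())


def evaluate_task(agent_output: str, evaluators: list[Evaluator]) -> bool:
    """A task counts as solved only if every attached evaluator passes."""
    return all(ev.score(agent_output) for ev in evaluators)
```

Usage would attach a task-specific mix of these evaluators to each benchmark task, so a time-sensitive query (e.g. a current stock price) is checked against freshly fetched ground truth rather than a stale static answer.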