FinMCP-Bench: Benchmarking LLM Agents for Real-World Financial Tool Use under the Model Context Protocol
March 26, 2026
Authors: Jie Zhu, Yimin Tian, Boyang Li, Kehao Wu, Zhongzhi Liang, Junhui Li, Xianyin Zhang, Lifan Guo, Feng Chen, Yong Liu, Chi Zhang
cs.AI
Abstract
This paper introduces FinMCP-Bench, a novel benchmark for evaluating large language models (LLMs) on real-world financial problems solved through invocation of financial Model Context Protocol (MCP) tools. FinMCP-Bench contains 613 samples spanning 10 main scenarios and 33 sub-scenarios, featuring both real and synthetic user queries to ensure diversity and authenticity. It integrates 65 real financial MCPs and three sample types (single-tool, multi-tool, and multi-turn), enabling evaluation of models across different levels of task complexity. Using this benchmark, we systematically assess a range of mainstream LLMs and propose metrics that explicitly measure tool-invocation accuracy and reasoning capability. FinMCP-Bench provides a standardized, practical, and challenging testbed for advancing research on financial LLM agents.
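The tool-invocation accuracy metric mentioned above can be illustrated with a minimal sketch. This is not the paper's actual evaluation code: the sample schema (`type`, `expected_calls`, `predicted_calls` fields) and the exact-match scoring rule are assumptions made purely for illustration.

```python
from collections import defaultdict

def call_matches(expected, predicted):
    # Illustrative rule: a predicted call is correct only if the tool
    # name and the full argument dict match the reference exactly.
    return (predicted.get("tool") == expected["tool"]
            and predicted.get("args") == expected["args"])

def invocation_accuracy(samples):
    # Fraction of samples whose entire reference call sequence is
    # reproduced, reported per sample type and overall.
    totals, correct = defaultdict(int), defaultdict(int)
    for s in samples:
        totals[s["type"]] += 1
        exp, pred = s["expected_calls"], s["predicted_calls"]
        ok = (len(exp) == len(pred)
              and all(call_matches(e, p) for e, p in zip(exp, pred)))
        correct[s["type"]] += int(ok)
    report = {t: correct[t] / totals[t] for t in totals}
    report["overall"] = sum(correct.values()) / sum(totals.values())
    return report

# Hypothetical samples with made-up tool names (get_quote, get_news):
samples = [
    {"type": "single-tool",
     "expected_calls": [{"tool": "get_quote", "args": {"symbol": "AAPL"}}],
     "predicted_calls": [{"tool": "get_quote", "args": {"symbol": "AAPL"}}]},
    {"type": "multi-tool",
     "expected_calls": [{"tool": "get_quote", "args": {"symbol": "AAPL"}},
                        {"tool": "get_news", "args": {"symbol": "AAPL"}}],
     "predicted_calls": [{"tool": "get_quote", "args": {"symbol": "AAPL"}}]},
]
print(invocation_accuracy(samples))
# → {'single-tool': 1.0, 'multi-tool': 0.0, 'overall': 0.5}
```

A real harness would additionally score multi-turn samples turn by turn and separate tool-selection errors from argument errors, but the per-type breakdown above mirrors how a benchmark with single-tool, multi-tool, and multi-turn splits can be reported.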