FinMCP-Bench: Benchmarking LLM Agents for Real-World Financial Tool Use under the Model Context Protocol
March 26, 2026
Authors: Jie Zhu, Yimin Tian, Boyang Li, Kehao Wu, Zhongzhi Liang, Junhui Li, Xianyin Zhang, Lifan Guo, Feng Chen, Yong Liu, Chi Zhang
cs.AI
Abstract
This paper introduces FinMCP-Bench, a novel benchmark for evaluating large language models (LLMs) on solving real-world financial problems through tool invocation under the Model Context Protocol (MCP). FinMCP-Bench contains 613 samples spanning 10 main scenarios and 33 sub-scenarios, combining real and synthetic user queries to ensure diversity and authenticity. It incorporates 65 real financial MCP servers and three sample types — single-tool, multi-tool, and multi-turn — enabling evaluation of models across different levels of task complexity. Using this benchmark, we systematically assess a range of mainstream LLMs and propose metrics that explicitly measure tool-invocation accuracy and reasoning capability. FinMCP-Bench provides a standardized, practical, and challenging testbed for advancing research on financial LLM agents.
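The abstract mentions metrics that explicitly measure tool-invocation accuracy across the three sample types. The paper does not specify the metric's exact form; the following is a minimal sketch of one plausible formulation — exact-match accuracy on (tool name, arguments) pairs, broken down by sample type. The sample schema, field names, and tool names (`get_stock_price`, `get_fx_rate`) are illustrative assumptions, not the benchmark's actual format.

```python
from collections import defaultdict

def tool_call_accuracy(samples):
    """Exact-match accuracy of predicted tool calls (name + arguments),
    grouped by sample type (single-tool / multi-tool / multi-turn).
    NOTE: hypothetical metric; FinMCP-Bench's actual scoring may differ."""
    hits = defaultdict(int)
    totals = defaultdict(int)
    for s in samples:
        totals[s["type"]] += 1
        # A sample counts as correct only if every call matches the gold
        # sequence exactly, in order.
        if s["predicted_calls"] == s["gold_calls"]:
            hits[s["type"]] += 1
    return {t: hits[t] / totals[t] for t in totals}

# Illustrative samples with made-up financial MCP tool calls.
samples = [
    {"type": "single-tool",
     "gold_calls": [("get_stock_price", {"symbol": "AAPL"})],
     "predicted_calls": [("get_stock_price", {"symbol": "AAPL"})]},
    {"type": "multi-tool",
     "gold_calls": [("get_fx_rate", {"pair": "USD/CNY"}),
                    ("get_stock_price", {"symbol": "TSLA"})],
     "predicted_calls": [("get_fx_rate", {"pair": "USD/JPY"}),  # wrong argument
                         ("get_stock_price", {"symbol": "TSLA"})]},
]

print(tool_call_accuracy(samples))  # {'single-tool': 1.0, 'multi-tool': 0.0}
```

A real harness would also need partial-credit variants (e.g. per-call precision/recall) for multi-tool and multi-turn samples, since all-or-nothing matching is harsh as task complexity grows.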