

Can AI Agents Answer Your Data Questions? A Benchmark for Data Agents

March 21, 2026
Authors: Ruiying Ma, Shreya Shankar, Ruiqi Chen, Yiming Lin, Sepanta Zeighami, Rajoshi Ghosh, Abhinav Gupta, Anushrut Gupta, Tanmai Gopal, Aditya G. Parameswaran
cs.AI

Abstract

Users across enterprises increasingly rely on AI agents to query their data through natural language. However, building reliable data agents remains difficult because real-world data is often fragmented across multiple heterogeneous database systems, with inconsistent references and information buried in unstructured text. Existing benchmarks only tackle individual pieces of this problem -- e.g., translating natural-language questions into SQL queries, answering questions over small tables provided in context -- but do not evaluate the full pipeline of integrating, transforming, and analyzing data across multiple database systems. To fill this gap, we present the Data Agent Benchmark (DAB), grounded in a formative study of enterprise data agent workloads across six industries. DAB comprises 54 queries across 12 datasets, 9 domains, and 4 database management systems. On DAB, the best frontier model (Gemini-3-Pro) achieves only 38% pass@1 accuracy. We benchmark five frontier LLMs, analyze their failure modes, and distill takeaways for future data agent development. Our benchmark and experiment code are published at github.com/ucbepic/DataAgentBench.
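The reported 38% pass@1 presumably follows the standard unbiased pass@k estimator used in code-generation benchmarks, where n samples are drawn per query and c of them pass; whether DAB uses this exact formula is an assumption. A minimal sketch:

```python
from math import comb

def pass_at_k(n: int, c: int, k: int) -> float:
    """Unbiased pass@k estimator: the probability that at least one
    of k samples, drawn without replacement from n generations of
    which c are correct, passes. For k=1 this reduces to c/n."""
    if n - c < k:
        # Fewer incorrect samples than k: some correct sample is guaranteed.
        return 1.0
    return 1.0 - comb(n - c, k) / comb(n, k)

# Example: 5 correct out of 10 samples -> pass@1 = 0.5
print(pass_at_k(10, 5, 1))
```

A benchmark-level score would then average `pass_at_k` over all queries (54 in DAB's case).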