OfficeQA Pro: An Enterprise Benchmark for End-to-End Grounded Reasoning
March 9, 2026
Authors: Krista Opsahl-Ong, Arnav Singhvi, Jasmine Collins, Ivan Zhou, Cindy Wang, Ashutosh Baheti, Owen Oertell, Jacob Portes, Sam Havens, Erich Elsen, Michael Bendersky, Matei Zaharia, Xing Chen
cs.AI
Abstract
We introduce OfficeQA Pro, a benchmark for evaluating AI agents on grounded, multi-document reasoning over a large and heterogeneous document corpus. The corpus consists of U.S. Treasury Bulletins spanning nearly 100 years, comprising 89,000 pages and over 26 million numerical values. OfficeQA Pro consists of 133 questions that require precise document parsing, retrieval, and analytical reasoning across both unstructured text and tabular data. Frontier LLMs, including Claude Opus 4.6, GPT-5.4, and Gemini 3.1 Pro Preview, achieve less than 5% accuracy on OfficeQA Pro when relying on parametric knowledge alone, and less than 12% with additional access to the web. When provided directly with the document corpus, frontier agents still fail on over half of the questions, scoring 34.1% on average. We find that providing agents with a structured document representation produced by Databricks' ai_parse_document yields a 16.1% average relative performance gain across agents. We conduct additional ablations to study the effects of model selection, table representation, retrieval strategy, and test-time scaling on performance. Despite these improvements, significant headroom remains before agents can be considered reliable at enterprise-grade grounded reasoning.
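For readers comparing the headline numbers: the 16.1% figure is a *relative* gain, measured as a fraction of the baseline score rather than an additive accuracy increase. A minimal sketch (the specific inputs below are illustrative, not per-agent results from the paper):

```python
def relative_gain(baseline: float, improved: float) -> float:
    """Relative performance gain: improvement expressed as a fraction of the baseline."""
    return (improved - baseline) / baseline

# Illustrative: a 34.1%-accuracy baseline with a +16.1% relative gain
baseline = 0.341
improved = baseline * (1 + 0.161)          # absolute accuracy after the relative gain
print(round(improved, 3))                  # 0.396
print(round(relative_gain(baseline, improved), 3))  # 0.161
```

Under this reading, a 16.1% relative gain over a 34.1% baseline corresponds to roughly a 5.5-point absolute improvement, not a jump to 50.2%.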