Hunt Instead of Wait: Evaluating Deep Data Research on Large Language Models
February 2, 2026
Authors: Wei Liu, Peijie Yu, Michele Orini, Yali Du, Yulan He
cs.AI
Abstract
The agency expected of agentic Large Language Models goes beyond answering questions correctly; it requires the autonomy to set goals and decide what to explore. We term this capability investigatory intelligence, distinguishing it from executional intelligence, which merely completes assigned tasks. Data science provides a natural testbed, as real-world analysis starts from raw data rather than explicit queries, yet few benchmarks capture this setting. To address this, we introduce Deep Data Research (DDR), an open-ended task in which LLMs autonomously extract key insights from databases, and DDR-Bench, a large-scale, checklist-based benchmark that enables verifiable evaluation. Results show that while frontier models display emerging agency, long-horizon exploration remains challenging. Our analysis highlights that effective investigatory intelligence depends not only on agent scaffolding or sheer scale, but also on the intrinsic strategies of agentic models.