

Hunt Instead of Wait: Evaluating Deep Data Research on Large Language Models

February 2, 2026
作者: Wei Liu, Peijie Yu, Michele Orini, Yali Du, Yulan He
cs.AI

Abstract

The agency expected of agentic Large Language Models goes beyond answering correctly: it requires the autonomy to set goals and decide what to explore. We term this capability investigatory intelligence, distinguishing it from executional intelligence, which merely completes assigned tasks. Data science provides a natural testbed, since real-world analysis starts from raw data rather than explicit queries, yet few benchmarks target this dimension. To address this, we introduce Deep Data Research (DDR), an open-ended task in which LLMs autonomously extract key insights from databases, and DDR-Bench, a large-scale, checklist-based benchmark that enables verifiable evaluation. Results show that while frontier models display emerging agency, long-horizon autonomous exploration remains challenging. Our analysis highlights that effective investigatory intelligence depends not only on agent scaffolding or model scale, but also on the intrinsic exploration strategies of the model itself.