AgentLongBench: A Controllable Long-Context Benchmark for Long-Context Agents via Environment Rollouts
January 28, 2026
Authors: Shicheng Fang, Yuxin Wang, XiaoRan Liu, Jiahao Lu, Chuanyuan Tan, Xinchi Chen, Yining Zheng, Xuanjing Huang, Xipeng Qiu
cs.AI
Abstract
The evolution of Large Language Models (LLMs) into autonomous agents necessitates the management of extensive, dynamic contexts. Current benchmarks, however, remain largely static, relying on passive retrieval tasks that fail to simulate the complexities of agent-environment interaction, such as non-linear reasoning and iterative feedback. To address this, we introduce AgentLongBench, which evaluates agents through simulated environment rollouts based on Lateral Thinking Puzzles. This framework generates rigorous interaction trajectories across knowledge-intensive and knowledge-free scenarios. Experiments with state-of-the-art models and memory systems (32K to 4M tokens) expose a critical weakness: while adept at static retrieval, agents struggle with the dynamic information synthesis essential for workflows. Our analysis indicates that this degradation is driven by the minimum number of tokens required to resolve a query. This factor explains why the high information density inherent in massive tool responses poses a significantly greater challenge than the memory fragmentation typical of long-turn dialogues.
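To make the rollout mechanism concrete, here is a minimal sketch of how an environment-rollout evaluation for a lateral-thinking-puzzle task could be structured. This is an illustration under assumptions, not the paper's implementation: the names `PuzzleEnv`, `Trajectory`, `rollout`, and the `agent_ask` callback are hypothetical, and a real benchmark would add grading and trajectory logging.

```python
# Hypothetical sketch of an environment-rollout loop for a
# lateral-thinking-puzzle task: the agent asks questions, the environment
# answers from a hidden solution, and the accumulated turns form the
# long context the agent must manage.

from dataclasses import dataclass, field


@dataclass
class PuzzleEnv:
    surface: str           # the visible puzzle statement shown to the agent
    hidden_solution: str   # ground truth the agent must reconstruct
    facts: dict            # normalized question -> "yes" / "no" / "irrelevant"

    def step(self, question: str) -> str:
        # Answer only from the stored facts; anything else is "irrelevant".
        return self.facts.get(question.strip().lower(), "irrelevant")


@dataclass
class Trajectory:
    turns: list = field(default_factory=list)  # (question, answer) pairs

    def context(self) -> str:
        # Interaction history the agent must keep in its context window.
        return "\n".join(f"Q: {q}\nA: {a}" for q, a in self.turns)


def rollout(env: PuzzleEnv, agent_ask, max_turns: int = 20) -> Trajectory:
    """Run one rollout: the agent conditions each new question on the
    full trajectory so far and stops when it believes it can answer."""
    traj = Trajectory()
    for _ in range(max_turns):
        question = agent_ask(env.surface, traj.context())
        if question is None:  # agent signals it is ready to answer
            break
        traj.turns.append((question, env.step(question)))
    return traj
```

Under this sketch, scoring would compare the agent's final reconstruction against `hidden_solution`, and the growing `traj.context()` is what exercises long-context handling rather than a single static retrieval prompt.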