
CCTU: A Benchmark for Tool Use under Complex Constraints

March 16, 2026
Authors: Junjie Ye, Guoqiang Zhang, Wenjie Fu, Tao Gui, Qi Zhang, Xuanjing Huang
cs.AI

Abstract

Solving problems through tool use under explicit constraints constitutes a highly challenging yet unavoidable scenario for large language models (LLMs), requiring capabilities such as function calling, instruction following, and self-refinement. However, progress has been hindered by the absence of dedicated evaluations. To address this, we introduce CCTU, a benchmark for evaluating LLM tool use under complex constraints. CCTU is grounded in a taxonomy of 12 constraint categories spanning four dimensions (i.e., resource, behavior, toolset, and response). The benchmark comprises 200 carefully curated and challenging test cases across diverse tool-use scenarios, each involving an average of seven constraint types and an average prompt length exceeding 4,700 tokens. To enable reliable evaluation, we develop an executable constraint validation module that performs step-level validation and enforces compliance during multi-turn interactions between models and their environments. We evaluate nine state-of-the-art LLMs in both thinking and non-thinking modes. Results indicate that when strict adherence to all constraints is required, no model achieves a task completion rate above 20%. Further analysis reveals that models violate constraints in over 50% of cases, particularly in the resource and response dimensions. Moreover, LLMs demonstrate limited capacity for self-refinement even after receiving detailed feedback on constraint violations, highlighting a critical bottleneck in the development of robust tool-use agents. To facilitate future research, we release the data and code.
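The abstract's executable constraint validation module, which checks each tool call against constraints from the four dimensions (resource, behavior, toolset, response) during multi-turn interaction, could be sketched as follows. This is a minimal illustrative sketch, not the authors' implementation: the `Constraint`, `ToolCall`, and `validate_step` names and the example limits are all hypothetical.

```python
from dataclasses import dataclass
from typing import Callable

# Hypothetical sketch of step-level constraint validation: each constraint
# inspects a proposed tool call plus the running interaction state, and the
# validator reports every violation before the call would be executed.

@dataclass
class ToolCall:
    name: str
    args: dict

@dataclass
class Constraint:
    category: str     # one of "resource", "behavior", "toolset", "response"
    description: str  # human-readable text, usable as violation feedback
    check: Callable[["ToolCall", dict], bool]  # True when the step complies

def validate_step(call: ToolCall, state: dict,
                  constraints: list[Constraint]) -> list[str]:
    """Return descriptions of every constraint the proposed step violates."""
    return [c.description for c in constraints if not c.check(call, state)]

# Example constraints, with illustrative (made-up) limits.
constraints = [
    Constraint("resource", "at most 3 tool calls per task",
               lambda call, state: state.get("calls_made", 0) < 3),
    Constraint("toolset", "only whitelisted tools may be used",
               lambda call, state: call.name in state.get("allowed_tools", set())),
    Constraint("behavior", "search queries must be non-empty",
               lambda call, state: call.name != "search"
                                   or bool(call.args.get("query"))),
]

state = {"calls_made": 3, "allowed_tools": {"search", "calculator"}}
violations = validate_step(ToolCall("search", {"query": ""}), state, constraints)
# The resource and behavior constraints above are violated; their
# descriptions could be returned to the model as self-refinement feedback.
```

Returning the violated constraints' descriptions mirrors the paper's setup, where models receive detailed violation feedback and are given a chance to self-correct.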
PDF · March 19, 2026