Beyond Accuracy: Unveiling Inefficiency Patterns in Tool-Integrated Reasoning
April 7, 2026
Authors: Qisheng Su, Shiting Huang, Zhen Fang, Ziyan Chen, Zehui Chen, Feng Zhao
cs.AI
Abstract
In real-world Tool-Integrated Reasoning (TIR) scenarios, where LLMs interleave reasoning with external tool calls, a major source of inefficiency is that tool calls create pauses between LLM requests and cause KV-cache eviction, forcing recomputation. In addition, the long, unfiltered responses returned by external tools inflate the KV cache, so each decode step spends more time loading the growing cache and thus becomes steadily slower as context length increases. However, existing efficiency metrics such as token counts and tool-call counts fail to capture real model inference latency. To address this, we introduce PTE (Prefill Token Equivalents), a hardware-aware TIR-efficiency metric that unifies the costs of internal reasoning and external tool use while explicitly accounting for non-reusable KV caches and long tool responses. Validation in a high-concurrency industrial setting shows that PTE aligns significantly better with wall-clock latency than standard token counts, while maintaining consistent efficiency rankings across diverse hardware profiles. We conduct extensive experiments across five TIR benchmarks, quantify their PTE costs, and identify four inefficiency patterns that appear in TIR. We also find that trajectories with higher PTE costs tend to have lower reasoning correctness, indicating that simply using more tools does not improve answer quality.
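The abstract does not give the exact PTE formula, but the two cost sources it names (re-prefilling an evicted KV cache after each tool call, and decode work weighted relative to prefill) can be illustrated with a minimal sketch. Everything here is an assumption for illustration: the function name `pte_cost`, the decode-to-prefill cost ratio `r`, and the simplification that every tool call evicts the entire cache are ours, not the paper's.

```python
# Hypothetical PTE-style cost model (illustrative only; not the paper's formula).
# Assumes every tool call evicts the whole KV cache, so each turn re-prefills
# the full accumulated context before decoding.

def pte_cost(segments, r=4.0):
    """Estimate cost in prefill-token equivalents.

    segments: list of (decode_tokens, tool_response_tokens), one per LLM turn.
    r: assumed hardware-dependent ratio of per-token decode cost to prefill cost.
    """
    context = 0   # tokens accumulated in the conversation so far
    total = 0.0   # running cost in prefill-token equivalents
    for decode_tokens, tool_tokens in segments:
        total += context            # re-prefill the context evicted by the tool call
        total += r * decode_tokens  # decoded tokens cost more than prefilled ones
        context += decode_tokens + tool_tokens  # tool responses inflate the context
    return total
```

Under this toy model, a long unfiltered tool response raises the cost of every subsequent turn (it is re-prefilled each time), which is why token counts alone understate latency: 100 tokens of tool output early in a trajectory cost far more than 100 tokens decoded in the final turn.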