

LoopTool: Closing the Data-Training Loop for Robust LLM Tool Calls

November 12, 2025
Authors: Kangning Zhang, Wenxiang Jiao, Kounianhua Du, Yuan Lu, Weiwen Liu, Weinan Zhang, Lei Zhang, Yong Yu
cs.AI

Abstract

Augmenting Large Language Models (LLMs) with external tools enables them to execute complex, multi-step tasks. However, tool learning is hampered by static synthetic data pipelines in which data generation and model training are executed as two separate, non-interactive processes. This approach fails to adaptively focus on a model's specific weaknesses and allows noisy labels to persist, degrading training efficiency. We introduce LoopTool, a fully automated, model-aware data evolution framework that closes this loop by tightly integrating data synthesis and model training. LoopTool iteratively refines both the data and the model through three synergistic modules: (1) Greedy Capability Probing (GCP) diagnoses the model's mastered and failed capabilities; (2) Judgement-Guided Label Verification (JGLV) uses an open-source judge model to find and correct annotation errors, progressively purifying the dataset; and (3) Error-Driven Data Expansion (EDDE) generates new, challenging samples based on identified failures. This closed-loop process operates within a cost-effective, open-source ecosystem, eliminating dependence on expensive closed-source APIs. Experiments show that our 8B model trained with LoopTool significantly surpasses its 32B data generator and achieves new state-of-the-art results on the BFCL-v3 and ACEBench benchmarks for its scale. Our work demonstrates that closed-loop, self-refining data pipelines can dramatically enhance the tool-use capabilities of LLMs.
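The abstract describes the loop at a high level: probe the model for failures, let a judge model correct mislabeled samples, expand the data from genuine failures, and retrain. A minimal sketch of that control flow is given below; the function names, the dict-based sample format, and the callable interfaces are all assumptions for illustration, not the paper's actual API.

```python
# Illustrative sketch of LoopTool's closed data-training loop, based only on
# the abstract. All names and interfaces here are hypothetical.

def loop_tool(dataset, model_predict, judge_fix, expand, train, rounds=3):
    """Iteratively probe, verify, expand, and retrain (assumed interfaces).

    dataset:       list of {"query": ..., "label": ...} samples
    model_predict: sample -> predicted tool call
    judge_fix:     (sample, prediction) -> corrected label, or None if the
                   original label stands
    expand:        list of failed samples -> new challenging samples
    train:         dataset -> updated model_predict callable
    """
    for _ in range(rounds):
        failed = []
        for sample in dataset:
            pred = model_predict(sample)
            # Greedy Capability Probing: compare prediction against the label.
            if pred != sample["label"]:
                # Judgement-Guided Label Verification: the judge may decide
                # the label, not the model, was wrong, and correct it.
                fixed = judge_fix(sample, pred)
                if fixed is not None:
                    sample["label"] = fixed   # purify the dataset
                else:
                    failed.append(sample)     # genuine model failure
        # Error-Driven Data Expansion: synthesize harder variants of failures.
        dataset.extend(expand(failed))
        # Retrain on the purified and expanded data; the next round probes
        # the updated model, keeping the loop model-aware.
        model_predict = train(dataset)
    return dataset, model_predict
```

The point of the sketch is the coupling: label correction and data expansion both depend on the current model's predictions, so data quality and model capability improve together rather than in two disconnected stages.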
December 1, 2025