
Feedback-Driven Tool-Use Improvements in Large Language Models via Automated Build Environments

August 12, 2025
Authors: Junjie Ye, Changhao Jiang, Zhengyin Du, Yufei Xu, Xuesong Yao, Zhiheng Xi, Xiaoran Fan, Qi Zhang, Xuanjing Huang, Jiecao Chen
cs.AI

Abstract

Effective tool use is essential for large language models (LLMs) to interact meaningfully with their environment. However, progress is limited by the lack of efficient reinforcement learning (RL) frameworks specifically designed for tool use, due to challenges in constructing stable training environments and designing verifiable reward mechanisms. To address this, we propose an automated environment construction pipeline, incorporating scenario decomposition, document generation, function integration, complexity scaling, and localized deployment. This enables the creation of high-quality training environments that provide detailed and measurable feedback without relying on external tools. Additionally, we introduce a verifiable reward mechanism that evaluates both the precision of tool use and the completeness of task execution. When combined with trajectory data collected from the constructed environments, this mechanism integrates seamlessly with standard RL algorithms to facilitate feedback-driven model training. Experiments on LLMs of varying scales demonstrate that our approach significantly enhances the models' tool-use performance without degrading their general capabilities, regardless of inference modes or training algorithms. Our analysis suggests that these gains result from improved context understanding and reasoning, driven by updates to the lower-layer MLP parameters in models.
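The abstract does not spell out how the verifiable reward is computed, but a minimal sketch might look like the following. Everything here is an assumption for illustration: the `Trajectory` fields (`called_tools`, `expected_tools`, `completed_steps`, `required_steps`), the matching rule, and the weighted-sum combination are hypothetical, not the paper's actual scoring rules.

```python
from dataclasses import dataclass, field

@dataclass
class Trajectory:
    """One rollout in a constructed environment (illustrative fields, not from the paper)."""
    called_tools: list[str] = field(default_factory=list)    # tool calls the model issued
    expected_tools: list[str] = field(default_factory=list)  # calls the environment expects
    completed_steps: int = 0                                  # subtasks verified as done
    required_steps: int = 1                                   # subtasks needed to finish

def tool_use_precision(traj: Trajectory) -> float:
    """Fraction of issued tool calls that exactly match an expected call."""
    if not traj.called_tools:
        return 0.0
    expected = set(traj.expected_tools)
    hits = sum(1 for call in traj.called_tools if call in expected)
    return hits / len(traj.called_tools)

def task_completeness(traj: Trajectory) -> float:
    """Fraction of required subtasks the environment verified as completed."""
    return traj.completed_steps / max(traj.required_steps, 1)

def verifiable_reward(traj: Trajectory, alpha: float = 0.5) -> float:
    """Weighted combination of the two verifiable signals; alpha is a free design choice."""
    return alpha * tool_use_precision(traj) + (1.0 - alpha) * task_completeness(traj)

# Example: score a collected rollout before handing it to a standard RL algorithm.
traj = Trajectory(
    called_tools=["search_flights(origin='SFO')", "book_flight(id=42)"],
    expected_tools=["search_flights(origin='SFO')", "book_flight(id=42)", "send_receipt(id=42)"],
    completed_steps=2,
    required_steps=3,
)
print(verifiable_reward(traj))  # 0.5 * 1.0 + 0.5 * (2/3) ≈ 0.833
```

Because both terms are computed from environment-verified quantities rather than a learned judge, a scalar reward like this plugs directly into standard RL training loops over the collected trajectories, which matches the abstract's claim of seamless integration with standard RL algorithms.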