

Feedback-Driven Tool-Use Improvements in Large Language Models via Automated Build Environments

August 12, 2025
Authors: Junjie Ye, Changhao Jiang, Zhengyin Du, Yufei Xu, Xuesong Yao, Zhiheng Xi, Xiaoran Fan, Qi Zhang, Xuanjing Huang, Jiecao Chen
cs.AI

Abstract

Effective tool use is essential for large language models (LLMs) to interact meaningfully with their environment. However, progress is limited by the lack of efficient reinforcement learning (RL) frameworks specifically designed for tool use, due to challenges in constructing stable training environments and designing verifiable reward mechanisms. To address this, we propose an automated environment construction pipeline, incorporating scenario decomposition, document generation, function integration, complexity scaling, and localized deployment. This enables the creation of high-quality training environments that provide detailed and measurable feedback without relying on external tools. Additionally, we introduce a verifiable reward mechanism that evaluates both the precision of tool use and the completeness of task execution. When combined with trajectory data collected from the constructed environments, this mechanism integrates seamlessly with standard RL algorithms to facilitate feedback-driven model training. Experiments on LLMs of varying scales demonstrate that our approach significantly enhances the models' tool-use performance without degrading their general capabilities, regardless of inference modes or training algorithms. Our analysis suggests that these gains result from improved context understanding and reasoning, driven by updates to the lower-layer MLP parameters in models.
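To make the verifiable reward mechanism concrete, the sketch below illustrates one way a reward of this kind could be scored: combining the precision of the model's tool calls against reference calls with a check on how completely the final environment state satisfies the task goal. The function names, state representation, and weights are illustrative assumptions, not the paper's implementation.

```python
from dataclasses import dataclass


@dataclass
class ToolCall:
    name: str
    arguments: dict


def tool_use_precision(predicted: list[ToolCall], reference: list[ToolCall]) -> float:
    """Fraction of predicted tool calls that exactly match an unused reference
    call (same tool name and arguments). Returns 0.0 if nothing is predicted."""
    if not predicted:
        return 0.0
    matched = 0
    remaining = list(reference)
    for call in predicted:
        for ref in remaining:
            if call.name == ref.name and call.arguments == ref.arguments:
                matched += 1
                remaining.remove(ref)
                break
    return matched / len(predicted)


def task_completeness(env_state: dict, goal_state: dict) -> float:
    """Fraction of goal fields satisfied by the final environment state."""
    if not goal_state:
        return 1.0
    satisfied = sum(1 for key, value in goal_state.items() if env_state.get(key) == value)
    return satisfied / len(goal_state)


def verifiable_reward(predicted, reference, env_state, goal_state,
                      w_precision: float = 0.5, w_complete: float = 0.5) -> float:
    """Scalar reward for a trajectory, usable with standard RL algorithms
    (e.g. PPO-style training). The 0.5/0.5 weighting is an assumption."""
    return (w_precision * tool_use_precision(predicted, reference)
            + w_complete * task_completeness(env_state, goal_state))
```

Because both terms are computed from locally deployed environment state and recorded tool calls, a reward of this shape can be verified without external tools, which is the property the abstract emphasizes.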