
Don't Just Fine-tune the Agent, Tune the Environment

October 11, 2025
Authors: Siyuan Lu, Zechuan Wang, Hongxuan Zhang, Qintong Wu, Leilei Gan, Chenyi Zhuang, Jinjie Gu, Tao Lin
cs.AI

Abstract

Large Language Model (LLM) agents show great promise for complex, multi-turn tool-use tasks, but their development is often hampered by the extreme scarcity of high-quality training data. Supervised fine-tuning (SFT) on synthetic data leads to overfitting, whereas standard reinforcement learning (RL) struggles with a critical cold-start problem and training instability. To address these challenges, we introduce Environment Tuning, a novel training paradigm that enables agents to learn complex behaviors directly from problem instances without relying on pre-collected expert trajectories. Environment Tuning orchestrates this learning process through a structured curriculum, actionable environment augmentation that provides corrective feedback, and fine-grained progress rewards to ensure stable and efficient exploration. Using only 400 problem instances from the Berkeley Function-Calling Leaderboard (BFCL) benchmark, our method not only achieves competitive in-distribution performance against strong baselines but also demonstrates superior out-of-distribution generalization, overcoming the performance collapse common to SFT-based approaches. Our work presents a paradigm shift from supervised fine-tuning on static trajectories to dynamic, environment-based exploration, paving the way for training more robust and data-efficient agents.
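The abstract describes the training mechanics only at a high level. As a rough illustration of two of the named ingredients, the sketch below shows a toy environment wrapper that (i) returns corrective feedback when a tool call fails (in the spirit of "actionable environment augmentation") and (ii) emits fine-grained progress rewards for newly completed sub-goals rather than a single sparse end-of-episode signal. This is not the paper's released code; all class, method, and field names (AugmentedToolEnv, ToolCallResult, required_subgoals, etc.) are hypothetical.

```python
# Illustrative sketch only: names and tool backend are assumptions, not from the paper.
from dataclasses import dataclass, field


@dataclass
class ToolCallResult:
    ok: bool
    output: str = ""
    error: str = ""


@dataclass
class AugmentedToolEnv:
    """Toy multi-turn tool-use environment with corrective feedback and
    fine-grained progress rewards, loosely following the abstract's description."""
    required_subgoals: list                      # sub-goals the task must complete
    completed: set = field(default_factory=set)  # sub-goals satisfied so far

    def step(self, tool_name: str, args: dict) -> tuple[str, float, bool]:
        """Execute one tool call; return (observation, reward, done)."""
        result = self._execute(tool_name, args)

        if not result.ok:
            # Actionable environment augmentation: instead of a bare failure,
            # surface a corrective hint the agent can condition on next turn.
            observation = (f"Tool `{tool_name}` failed: {result.error}. "
                           "Check the argument names and retry.")
            return observation, 0.0, False

        # Fine-grained progress reward: credit each newly satisfied sub-goal
        # instead of rewarding only full task completion.
        newly_done = {g for g in self.required_subgoals
                      if g not in self.completed and g in result.output}
        self.completed |= newly_done
        reward = len(newly_done) / len(self.required_subgoals)

        done = self.completed == set(self.required_subgoals)
        return result.output, reward, done

    def _execute(self, tool_name: str, args: dict) -> ToolCallResult:
        # Stand-in tool backend; a real environment would dispatch to actual APIs.
        if tool_name == "search" and "query" in args:
            return ToolCallResult(ok=True, output=f"results for {args['query']}")
        return ToolCallResult(ok=False, error="unknown tool or missing argument")
```

A trainer would roll out the agent in such an environment and feed the shaped rewards to a standard RL objective; the paper's actual curriculum stages and reward definitions are not reproduced here.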