Don't Just Fine-tune the Agent, Tune the Environment
October 11, 2025
Authors: Siyuan Lu, Zechuan Wang, Hongxuan Zhang, Qintong Wu, Leilei Gan, Chenyi Zhuang, Jinjie Gu, Tao Lin
cs.AI
Abstract
Large Language Model (LLM) agents show great promise for complex, multi-turn
tool-use tasks, but their development is often hampered by the extreme scarcity
of high-quality training data. Supervised fine-tuning (SFT) on synthetic data
leads to overfitting, whereas standard reinforcement learning (RL) struggles
with a critical cold-start problem and training instability. To address these
challenges, we introduce Environment Tuning, a novel training
paradigm that enables agents to learn complex behaviors directly from problem
instances without relying on pre-collected expert trajectories.
Environment Tuning orchestrates this learning process through a
structured curriculum, actionable environment augmentation that provides
corrective feedback, and fine-grained progress rewards to ensure stable and
efficient exploration. Using only 400 problem instances from the Berkeley
Function-Calling Leaderboard (BFCL) benchmark, our method not only achieves
competitive in-distribution performance against strong baselines but also
demonstrates superior out-of-distribution generalization, overcoming the
performance collapse common to SFT-based approaches. Our work presents a
paradigm shift from supervised fine-tuning on static trajectories to dynamic,
environment-based exploration, paving the way for training more robust and
data-efficient agents.
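
To make the three ingredients named in the abstract concrete, the sketch below shows one plausible shape of an Environment Tuning training loop: a staged curriculum over task difficulty, an environment that returns actionable corrective feedback instead of a bare failure signal, and a dense progress reward in place of a sparse terminal reward. All class, function, and variable names here are hypothetical illustrations, not the paper's implementation, and the toy policy stands in for the LLM agent and its RL update.

```python
# Illustrative sketch of the Environment Tuning idea (hypothetical names,
# not the paper's code): curriculum + corrective feedback + progress reward.
import random
from dataclasses import dataclass


@dataclass
class ToolEnv:
    """Toy multi-turn tool-use environment with corrective feedback."""
    subgoals: list            # ordered tool calls the agent must complete
    done_subgoals: int = 0

    def step(self, action: str):
        # Actionable environment augmentation: on a wrong call, return a
        # hint the agent can condition on in the next turn, not just "fail".
        if action == self.subgoals[self.done_subgoals]:
            self.done_subgoals += 1
            feedback = "ok"
        else:
            expected = self.subgoals[self.done_subgoals]
            feedback = f"invalid call; expected something like '{expected}'"
        # Fine-grained progress reward: fraction of subgoals completed,
        # rather than a single 0/1 reward at the end of the episode.
        progress_reward = self.done_subgoals / len(self.subgoals)
        done = self.done_subgoals == len(self.subgoals)
        return feedback, progress_reward, done


def toy_policy(feedback: str, subgoals: list) -> str:
    """Stand-in for the LLM agent: explores, but exploits corrective hints."""
    if feedback.startswith("invalid call"):
        return feedback.split("'")[1]          # follow the environment's hint
    return random.choice(subgoals)             # otherwise explore


# Structured curriculum: start with short tool chains, lengthen them by stage.
curriculum = [2, 3, 5]                          # subgoals per task, per stage
for stage, n_subgoals in enumerate(curriculum, 1):
    returns = []
    for _episode in range(50):                  # toy budget; the paper uses 400 BFCL instances
        env = ToolEnv(subgoals=[f"tool_{i}" for i in range(n_subgoals)])
        feedback, reward, done = "start", 0.0, False
        for _turn in range(4 * n_subgoals):     # bounded multi-turn rollout
            action = toy_policy(feedback, env.subgoals)
            feedback, reward, done = env.step(action)
            if done:
                break
        returns.append(reward)
        # A real implementation would update the policy here (e.g. with PPO or
        # GRPO), using the dense progress rewards gathered along the rollout.
    print(f"stage {stage}: mean progress reward = {sum(returns)/len(returns):.2f}")
```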