

EnvScaler: Scaling Tool-Interactive Environments for LLM Agent via Programmatic Synthesis

January 9, 2026
Authors: Xiaoshuai Song, Haofei Chang, Guanting Dong, Yutao Zhu, Zhicheng Dou, Ji-Rong Wen
cs.AI

Abstract

Large language models (LLMs) are expected to be trained to act as agents in various real-world environments, but this process relies on rich and varied tool-interaction sandboxes. However, access to real systems is often restricted; LLM-simulated environments are prone to hallucinations and inconsistencies; and manually built sandboxes are hard to scale. In this paper, we propose EnvScaler, an automated framework for scalable tool-interaction environments via programmatic synthesis. EnvScaler comprises two components. First, SkelBuilder constructs diverse environment skeletons through topic mining, logic modeling, and quality evaluation. Then, ScenGenerator generates multiple task scenarios and rule-based trajectory validation functions for each environment. With EnvScaler, we synthesize 191 environments and about 7K scenarios, and apply them to Supervised Fine-Tuning (SFT) and Reinforcement Learning (RL) for Qwen3 series models. Results on three benchmarks show that EnvScaler significantly improves LLMs' ability to solve tasks in complex environments involving multi-turn, multi-tool interactions. We release our code and data at https://github.com/RUC-NLPIR/EnvScaler.
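To make the second component concrete: ScenGenerator is described as emitting, for each scenario, a rule-based function that checks an agent's tool-call trajectory and final environment state. The sketch below is purely illustrative and not from the paper — the `ToolCall` type, the flight-booking rules, and `validate_trajectory` are all hypothetical stand-ins for what such a synthesized validator might look like.

```python
from dataclasses import dataclass


@dataclass
class ToolCall:
    """One tool invocation recorded in an agent trajectory (hypothetical schema)."""
    name: str
    args: dict


def validate_trajectory(trajectory: list[ToolCall], final_state: dict) -> bool:
    """Apply a scenario's success rules to a trajectory and final state.

    Illustrative rules for a made-up flight-booking scenario:
    1. The agent must call search_flights before book_flight.
    2. The environment's final state must show a confirmed booking.
    """
    names = [call.name for call in trajectory]
    if "search_flights" not in names or "book_flight" not in names:
        return False
    if names.index("search_flights") > names.index("book_flight"):
        return False
    return final_state.get("booking_status") == "confirmed"


# A trajectory that satisfies both rules:
traj = [
    ToolCall("search_flights", {"dest": "SFO"}),
    ToolCall("book_flight", {"flight_id": "UA100"}),
]
print(validate_trajectory(traj, {"booking_status": "confirmed"}))  # True
```

Because such checks are deterministic Python rather than an LLM judge, they can serve directly as reward signals during RL training, which is presumably why the framework pairs each scenario with one.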