SynWorld: Virtual Scenario Synthesis for Agentic Action Knowledge Refinement
April 4, 2025
Authors: Runnan Fang, Xiaobin Wang, Yuan Liang, Shuofei Qiao, Jialong Wu, Zekun Xi, Ningyu Zhang, Yong Jiang, Pengjun Xie, Fei Huang, Huajun Chen
cs.AI
Abstract
In the interaction between agents and their environments, agents expand their
capabilities by planning and executing actions. However, LLM-based agents face
substantial challenges when deployed in novel environments or required to
navigate unconventional action spaces. To empower agents to autonomously
explore environments, optimize workflows, and enhance their understanding of
actions, we propose SynWorld, a framework that allows agents to synthesize
possible scenarios with multi-step action invocation within the action space
and perform Monte Carlo Tree Search (MCTS) exploration to effectively refine
their action knowledge in the current environment. Our experiments demonstrate
that SynWorld is an effective and general approach to learning action knowledge
in new environments. Code is available at https://github.com/zjunlp/SynWorld.
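
To make the described procedure concrete, the following is a minimal sketch of what scenario-driven MCTS refinement of action knowledge could look like. It is not the released SynWorld implementation; the helper names (synthesize_scenarios, evaluate, propose_refinement) are hypothetical placeholders for the scenario synthesis, execution feedback, and LLM-based rewriting steps that the abstract describes.

# A minimal sketch (not the authors' code): MCTS-style refinement of action
# knowledge. Each node holds a candidate textual description of how to use
# the action space; nodes are scored by running synthesized multi-step
# scenarios against that description.
import math
from dataclasses import dataclass, field

@dataclass
class Node:
    knowledge: str                       # candidate action-knowledge text
    parent: "Node | None" = None
    children: list = field(default_factory=list)
    visits: int = 0
    value: float = 0.0

def ucb(node, c=1.4):
    # Standard UCB1 score balancing exploitation and exploration.
    if node.visits == 0:
        return float("inf")
    return node.value / node.visits + c * math.sqrt(
        math.log(node.parent.visits) / node.visits
    )

def mcts_refine(seed_knowledge, synthesize_scenarios, evaluate,
                propose_refinement, iterations=50):
    """All callables are hypothetical placeholders:
    - synthesize_scenarios(): builds scenarios requiring multi-step action calls
    - evaluate(knowledge, scenarios): executes them, returns a reward in [0, 1]
    - propose_refinement(knowledge, feedback): asks an LLM for revised knowledge
    """
    root = Node(seed_knowledge)
    scenarios = synthesize_scenarios()
    for _ in range(iterations):
        # Selection: descend by UCB until reaching a leaf.
        node = root
        while node.children:
            node = max(node.children, key=ucb)
        # Expansion: ask for a refined version of the leaf's knowledge.
        feedback = evaluate(node.knowledge, scenarios)
        child = Node(propose_refinement(node.knowledge, feedback), parent=node)
        node.children.append(child)
        # Simulation + backpropagation: score the child, update its ancestors.
        reward = evaluate(child.knowledge, scenarios)
        while child:
            child.visits += 1
            child.value += reward
            child = child.parent
    # Return the most-visited refinement as the learned action knowledge.
    best = max(root.children, key=lambda n: n.visits) if root.children else root
    return best.knowledge

Returning the most-visited child rather than the highest-value one follows common MCTS practice, since visit counts are a more stable indicator of quality under noisy scenario rewards; the actual selection rule used by SynWorld may differ.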