
Multi-Agent Tool-Integrated Policy Optimization

October 6, 2025
Authors: Zhanfeng Mo, Xingxuan Li, Yuntao Chen, Lidong Bing
cs.AI

Abstract

Large language models (LLMs) increasingly rely on multi-turn tool-integrated planning for knowledge-intensive and complex reasoning tasks. Existing implementations typically rely on a single agent, but they suffer from limited context length and noisy tool responses. A natural solution is to adopt a multi-agent framework with planner- and worker-agents to manage context. However, no existing methods support effective reinforcement learning post-training of tool-integrated multi-agent frameworks. To address this gap, we propose Multi-Agent Tool-Integrated Policy Optimization (MATPO), which enables distinct roles (planner and worker) to be trained within a single LLM instance using role-specific prompts via reinforcement learning. MATPO is derived from a principled credit assignment mechanism across planner and worker rollouts. This design eliminates the need to deploy multiple LLMs, which would be memory-intensive, while preserving the benefits of specialization. Experiments on GAIA-text, WebWalkerQA, and FRAMES show that MATPO consistently outperforms single-agent baselines by an average of 18.38% relative improvement in performance and exhibits greater robustness to noisy tool outputs. Our findings highlight the effectiveness of unifying multiple agent roles within a single LLM and provide practical insights for stable and efficient multi-agent RL training.
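To make the setup concrete, below is a minimal, illustrative sketch (not the authors' released implementation) of the idea described in the abstract: a single policy LLM serves both the planner and worker roles through role-specific prompts, workers keep noisy tool output out of the planner's context, and the outcome reward of the final answer is shared across planner and worker rollouts so one policy update covers both roles. The names `call_policy`, `run_tool`, `PLANNER_PROMPT`, and `WORKER_PROMPT`, as well as the broadcast-reward credit assignment, are simplified assumptions inferred from the abstract rather than details taken from the paper.

```python
# Hypothetical sketch of a MATPO-style rollout with a single shared policy LLM.
# Stubs replace the real model and tools so the example runs standalone.

from dataclasses import dataclass, field
from typing import List

PLANNER_PROMPT = "You are the planner. Decompose the question into sub-queries."
WORKER_PROMPT = "You are the worker. Answer the sub-query using the tool output."


@dataclass
class Rollout:
    role: str                           # "planner" or "worker"
    messages: List[str] = field(default_factory=list)
    reward: float = 0.0                 # filled in after the final answer is scored


def call_policy(system_prompt: str, user_msg: str) -> str:
    """Stub for one generation step of the shared policy LLM."""
    return f"[{system_prompt.split('.')[0]}] response to: {user_msg}"


def run_tool(query: str) -> str:
    """Stub for a search/browse tool; real outputs may be long and noisy."""
    return f"tool result for '{query}'"


def multi_agent_rollout(question: str) -> List[Rollout]:
    planner = Rollout(role="planner")
    rollouts = [planner]

    # Planner turn: the same LLM, prompted as planner, proposes sub-queries.
    planner.messages.append(call_policy(PLANNER_PROMPT, question))
    sub_queries = [f"{question} (step {i})" for i in range(2)]  # stand-in parse

    # Worker turns: each sub-query runs in its own context, so raw (possibly
    # noisy) tool output never enters the planner's context window.
    for sq in sub_queries:
        worker = Rollout(role="worker")
        tool_out = run_tool(sq)
        worker.messages.append(call_policy(WORKER_PROMPT, f"{sq}\n{tool_out}"))
        rollouts.append(worker)

    return rollouts


def assign_credit(rollouts: List[Rollout], final_reward: float) -> None:
    # Simplified credit assignment: the outcome reward of the final answer is
    # broadcast to the planner rollout and every worker rollout it spawned,
    # so a single RL update trains both roles of the one underlying model.
    for r in rollouts:
        r.reward = final_reward


if __name__ == "__main__":
    rollouts = multi_agent_rollout("Who founded the company that built the first web browser?")
    assign_credit(rollouts, final_reward=1.0)
    for r in rollouts:
        print(r.role, r.reward, r.messages[0][:60])
```

Keeping both roles in one model instance, as the abstract notes, avoids the memory cost of deploying separate planner and worker LLMs while still letting each role see only its own prompt and context.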