Multi-Agent Tool-Integrated Policy Optimization

October 6, 2025
Authors: Zhanfeng Mo, Xingxuan Li, Yuntao Chen, Lidong Bing
cs.AI

Abstract

Large language models (LLMs) increasingly rely on multi-turn tool-integrated planning for knowledge-intensive and complex reasoning tasks. Existing implementations typically rely on a single agent, but they suffer from limited context length and noisy tool responses. A natural solution is to adopt a multi-agent framework with planner- and worker-agents to manage context. However, no existing methods support effective reinforcement learning post-training of tool-integrated multi-agent frameworks. To address this gap, we propose Multi-Agent Tool-Integrated Policy Optimization (MATPO), which enables distinct roles (planner and worker) to be trained within a single LLM instance using role-specific prompts via reinforcement learning. MATPO is derived from a principled credit assignment mechanism across planner and worker rollouts. This design eliminates the need to deploy multiple LLMs, which would be memory-intensive, while preserving the benefits of specialization. Experiments on GAIA-text, WebWalkerQA, and FRAMES show that MATPO consistently outperforms single-agent baselines by an average of 18.38% relative improvement in performance and exhibits greater robustness to noisy tool outputs. Our findings highlight the effectiveness of unifying multiple agent roles within a single LLM and provide practical insights for stable and efficient multi-agent RL training.
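
For intuition, below is a minimal, hypothetical sketch of the planner-worker rollout structure the abstract describes: one policy model plays both roles through role-specific system prompts, worker rollouts handle the tool calls and return only a summary to the planner, and the single task-level reward is shared across planner and worker turns for RL. All names here (`PLANNER_PROMPT`, `WORKER_PROMPT`, `model.generate`, the reply fields, `reward_fn`) are illustrative assumptions, not the paper's actual interfaces.

```python
# Illustrative sketch (not the paper's implementation): a single policy model
# serves both roles via role-specific prompts. Worker rollouts call tools and
# return a concise summary, so noisy tool outputs stay out of the planner's
# context; the task-level reward is then shared across both roles for RL.

PLANNER_PROMPT = "You are the planner: decompose the task and delegate subtasks."
WORKER_PROMPT = "You are the worker: solve the subtask using the available tools."


def worker_rollout(model, subtask, tools, max_turns=5):
    """Multi-turn tool-integrated rollout for one delegated subtask."""
    messages = [{"role": "system", "content": WORKER_PROMPT},
                {"role": "user", "content": subtask}]
    for _ in range(max_turns):
        reply = model.generate(messages)  # assumed API: returns {"text", "tool_call"}
        messages.append({"role": "assistant", "content": reply["text"]})
        call = reply.get("tool_call")
        if call is None:                  # no tool call => worker's final summary
            return reply["text"], messages
        observation = tools[call["name"]](**call["args"])
        messages.append({"role": "tool", "content": str(observation)})
    return messages[-1]["content"], messages


def multi_agent_rollout(model, task, tools, reward_fn, max_turns=8):
    """Planner delegates to worker rollouts; same model, different prompts."""
    planner_msgs = [{"role": "system", "content": PLANNER_PROMPT},
                    {"role": "user", "content": task}]
    worker_trajs = []
    for _ in range(max_turns):
        reply = model.generate(planner_msgs)  # assumed API: returns {"text", "subtask"}
        planner_msgs.append({"role": "assistant", "content": reply["text"]})
        subtask = reply.get("subtask")
        if subtask is None:                   # planner emits the final answer
            break
        summary, worker_msgs = worker_rollout(model, subtask, tools)
        worker_trajs.append(worker_msgs)      # full worker trace stays out of planner context
        planner_msgs.append({"role": "user", "content": summary})
    reward = reward_fn(planner_msgs[-1]["content"], task)
    # Credit assignment (as described in the abstract): the task-level reward is
    # propagated to the planner rollout and to every worker rollout it spawned,
    # so both roles of the shared LLM are optimized with the same RL objective.
    return planner_msgs, worker_trajs, reward
```

Because planner and worker are the same LLM instance distinguished only by prompts, this setup avoids deploying separate models for each role while still keeping worker tool traces out of the planner's limited context.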