
MPO: Boosting LLM Agents with Meta Plan Optimization

March 4, 2025
作者: Weimin Xiong, Yifan Song, Qingxiu Dong, Bingchan Zhao, Feifan Song, Xun Wang, Sujian Li
cs.AI

Abstract

Recent advancements in large language models (LLMs) have enabled LLM-based agents to successfully tackle interactive planning tasks. However, despite their successes, existing approaches often suffer from planning hallucinations and require retraining for each new agent. To address these challenges, we propose the Meta Plan Optimization (MPO) framework, which enhances agent planning capabilities by directly incorporating explicit guidance. Unlike previous methods that rely on complex knowledge, which either requires significant human effort or lacks quality assurance, MPO leverages high-level general guidance through meta plans to assist agent planning and enables continuous optimization of the meta plans based on feedback from the agent's task execution. Our experiments on two representative tasks demonstrate that MPO significantly outperforms existing baselines. Moreover, our analysis indicates that MPO provides a plug-and-play solution that improves both task completion efficiency and generalization in previously unseen scenarios.
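The loop the abstract describes, using execution feedback to refine high-level meta plans, can be illustrated with a minimal sketch. Everything below is hypothetical: the function names, the candidate plans, and the toy reward (which just counts how many of the task's key steps a plan mentions) stand in for the real agent rollout and feedback signal used in the paper.

```python
# Hypothetical sketch of an MPO-style loop: candidate meta plans (high-level
# guidance) are scored by execution feedback, and the best one is kept.

def execute_with_plan(meta_plan: str, task: str) -> float:
    """Toy stand-in for agent execution: reward in [0, 1].
    A plan scores higher when it mentions more of the task's key steps."""
    steps = task.split()
    hits = sum(1 for step in steps if step in meta_plan)
    return hits / len(steps)

def optimize_meta_plan(candidates: list[str], task: str) -> str:
    """Keep the candidate meta plan with the highest execution feedback."""
    best = candidates[0]
    best_reward = execute_with_plan(best, task)
    for plan in candidates[1:]:
        reward = execute_with_plan(plan, task)
        if reward > best_reward:
            best, best_reward = plan, reward
    return best

task = "find key open door"
candidates = [
    "look around the room",
    "find the key then open the door",
]
print(optimize_meta_plan(candidates, task))
# → find the key then open the door
```

In the actual framework the reward would come from the agent's task trajectory rather than keyword overlap, and the meta plan would be rewritten (not merely selected) based on that feedback; this sketch only shows the select-by-feedback skeleton.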

