

Retroformer: Retrospective Large Language Agents with Policy Gradient Optimization

August 4, 2023
作者: Weiran Yao, Shelby Heinecke, Juan Carlos Niebles, Zhiwei Liu, Yihao Feng, Le Xue, Rithesh Murthy, Zeyuan Chen, Jianguo Zhang, Devansh Arpit, Ran Xu, Phil Mui, Huan Wang, Caiming Xiong, Silvio Savarese
cs.AI

Abstract

Recent months have seen the emergence of a powerful new trend in which large language models (LLMs) are augmented to become autonomous language agents capable of performing objective-oriented, multi-step tasks on their own, rather than merely responding to queries from human users. Most existing language agents, however, are not optimized using environment-specific rewards. Although some agents enable iterative refinement through verbal feedback, they do not reason and plan in ways that are compatible with gradient-based learning from rewards. This paper introduces a principled framework for reinforcing large language agents by learning a retrospective model, which automatically tunes the language agent prompts from environment feedback through policy gradient. Specifically, our proposed agent architecture learns from rewards across multiple environments and tasks to fine-tune a pre-trained language model that refines the language agent prompt by summarizing the root cause of prior failed attempts and proposing action plans. Experimental results on various tasks demonstrate that the language agents improve over time and that our approach considerably outperforms baselines that do not properly leverage gradients from the environment. This suggests that using policy gradient optimization to improve language agents, an approach we believe our work is among the first to take, is promising and can be applied to optimize other models in the agent architecture to enhance agent performance over time.
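To make the policy-gradient idea in the abstract concrete, the following is a minimal REINFORCE-style sketch, not the authors' implementation: a small "retrospective" policy is rewarded when the reflection it emits improves the agent's return on the next attempt. The class RetrospectivePolicy, the reflection vocabulary VOCAB, and the stub reward episode_return_after are illustrative assumptions standing in for the paper's fine-tuned retrospective language model and environment rewards.

```python
# Hedged sketch: REINFORCE update for a toy retrospective policy.
# All names below are illustrative stand-ins, not the paper's code.
import torch
import torch.nn as nn

VOCAB = ["check-preconditions", "re-plan", "shorter-steps", "verify-output"]

class RetrospectivePolicy(nn.Module):
    """Maps a (toy) failure-summary embedding to a distribution over reflections."""
    def __init__(self, dim=16, n_actions=len(VOCAB)):
        super().__init__()
        self.net = nn.Sequential(nn.Linear(dim, 32), nn.Tanh(), nn.Linear(32, n_actions))

    def forward(self, x):
        return torch.distributions.Categorical(logits=self.net(x))

def episode_return_after(reflection_id: int) -> float:
    # Stand-in for re-running the language agent with the refined prompt
    # and measuring the environment reward of the new attempt.
    return 1.0 if VOCAB[reflection_id] == "re-plan" else 0.0

policy = RetrospectivePolicy()
optimizer = torch.optim.Adam(policy.parameters(), lr=1e-2)
baseline = 0.0

for step in range(200):
    failure_summary = torch.randn(1, 16)          # embedding of the failed trial
    dist = policy(failure_summary)
    action = dist.sample()
    reward = episode_return_after(action.item())  # improvement from the refined prompt
    baseline = 0.9 * baseline + 0.1 * reward      # running baseline for variance reduction
    # Policy gradient (REINFORCE): push up log-prob of reflections that helped.
    loss = -(reward - baseline) * dist.log_prob(action).mean()
    optimizer.zero_grad()
    loss.backward()
    optimizer.step()
```

In the paper's setting, the reward signal would instead come from the change in the agent's episode return between successive trials across multiple environments, and the policy would be a pre-trained language model generating full reflection text rather than a choice from a fixed vocabulary.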