
GEPA: Reflective Prompt Evolution Can Outperform Reinforcement Learning

July 25, 2025
作者: Lakshya A Agrawal, Shangyin Tan, Dilara Soylu, Noah Ziems, Rishi Khare, Krista Opsahl-Ong, Arnav Singhvi, Herumb Shandilya, Michael J Ryan, Meng Jiang, Christopher Potts, Koushik Sen, Alexandros G. Dimakis, Ion Stoica, Dan Klein, Matei Zaharia, Omar Khattab
cs.AI

Abstract
Large language models (LLMs) are increasingly adapted to downstream tasks via reinforcement learning (RL) methods like Group Relative Policy Optimization (GRPO), which often require thousands of rollouts to learn new tasks. We argue that the interpretable nature of language can often provide a much richer learning medium for LLMs, compared with policy gradients derived from sparse, scalar rewards. To test this, we introduce GEPA (Genetic-Pareto), a prompt optimizer that thoroughly incorporates natural language reflection to learn high-level rules from trial and error. Given any AI system containing one or more LLM prompts, GEPA samples system-level trajectories (e.g., reasoning, tool calls, and tool outputs) and reflects on them in natural language to diagnose problems, propose and test prompt updates, and combine complementary lessons from the Pareto frontier of its own attempts. As a result of GEPA's design, it can often turn even just a few rollouts into a large quality gain. Across four tasks, GEPA outperforms GRPO by 10% on average and by up to 20%, while using up to 35x fewer rollouts. GEPA also outperforms the leading prompt optimizer, MIPROv2, by over 10% across two LLMs, and demonstrates promising results as an inference-time search strategy for code optimization.
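The loop the abstract describes — sample a candidate from the Pareto frontier of past attempts, reflect on its rollouts, propose a prompt edit, and score the result — can be illustrated with a toy sketch. Everything below is hypothetical: `evaluate` is a stand-in keyword scorer, and `reflect_and_mutate` stands in for the LLM reflection step that reads rollout feedback in natural language; this is not the authors' implementation.

```python
import random

def evaluate(prompt, tasks):
    """Toy per-task score vector: fraction of a task's keywords found in the prompt."""
    return [sum(w in prompt for w in t) / len(t) for t in tasks]

def reflect_and_mutate(prompt, tasks, scores):
    """Stand-in for natural-language reflection: patch the prompt for the worst task."""
    worst = min(range(len(tasks)), key=lambda i: scores[i])
    return prompt + " " + random.choice(tasks[worst])

def pareto_front(candidates):
    """Keep (prompt, scores) pairs whose score vectors no other candidate dominates."""
    front = []
    for p, s in candidates:
        dominated = any(
            all(o >= x for o, x in zip(os, s)) and any(o > x for o, x in zip(os, s))
            for q, os in candidates if q != p
        )
        if not dominated:
            front.append((p, s))
    return front

def gepa_sketch(seed_prompt, tasks, budget=20, rng_seed=0):
    """Evolve prompts under a small rollout budget, sampling parents from the frontier."""
    random.seed(rng_seed)
    candidates = [(seed_prompt, evaluate(seed_prompt, tasks))]
    for _ in range(budget):
        parent, scores = random.choice(pareto_front(candidates))
        child = reflect_and_mutate(parent, tasks, scores)
        candidates.append((child, evaluate(child, tasks)))
    # Return the candidate with the best average score across tasks.
    return max(candidates, key=lambda c: sum(c[1]) / len(c[1]))

tasks = [["sort", "list"], ["parse", "json"], ["sum", "ints"]]
best_prompt, best_scores = gepa_sketch("You are a helpful assistant.", tasks)
```

Sampling parents from the frontier rather than only the single best candidate is what lets complementary lessons (a prompt that is strong on one task, weak on another) survive and later be combined.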