GEPA: Reflective Prompt Evolution Can Outperform Reinforcement Learning
July 25, 2025
Authors: Lakshya A Agrawal, Shangyin Tan, Dilara Soylu, Noah Ziems, Rishi Khare, Krista Opsahl-Ong, Arnav Singhvi, Herumb Shandilya, Michael J Ryan, Meng Jiang, Christopher Potts, Koushik Sen, Alexandros G. Dimakis, Ion Stoica, Dan Klein, Matei Zaharia, Omar Khattab
cs.AI
Abstract
Large language models (LLMs) are increasingly adapted to downstream tasks via
reinforcement learning (RL) methods like Group Relative Policy Optimization
(GRPO), which often require thousands of rollouts to learn new tasks. We argue
that the interpretable nature of language can often provide a much richer
learning medium for LLMs, compared with policy gradients derived from sparse,
scalar rewards. To test this, we introduce GEPA (Genetic-Pareto), a prompt
optimizer that thoroughly incorporates natural language reflection to learn
high-level rules from trial and error. Given any AI system containing one or
more LLM prompts, GEPA samples system-level trajectories (e.g., reasoning, tool
calls, and tool outputs) and reflects on them in natural language to diagnose
problems, propose and test prompt updates, and combine complementary lessons
from the Pareto frontier of its own attempts. As a result of GEPA's design, it
can often turn even just a few rollouts into a large quality gain. Across four
tasks, GEPA outperforms GRPO by 10% on average and by up to 20%, while using up
to 35x fewer rollouts. GEPA also outperforms the leading prompt optimizer,
MIPROv2, by over 10% across two LLMs, and demonstrates promising results as an
inference-time search strategy for code optimization.
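The optimization loop the abstract describes — sample rollouts, reflect on them, propose prompt updates, and recombine lessons from the Pareto frontier of prior attempts — can be illustrated with a toy sketch. Everything below is a hypothetical illustration, not the authors' implementation: the function names (`gepa_sketch`, `reflect_and_mutate`, `pareto_frontier`), the candidate representation (a bare prompt string), and the mutation step (a stub that appends a hint, standing in for LLM-driven natural-language reflection) are all assumptions.

```python
import random

def evaluate(prompt, tasks):
    """Stand-in for running the full LLM system; returns one score per task."""
    return [task(prompt) for task in tasks]

def pareto_frontier(pool):
    """Keep every candidate that achieves the best score on at least one task,
    so complementary strengths are preserved rather than averaged away."""
    n_tasks = len(next(iter(pool.values())))
    best = [max(scores[i] for scores in pool.values()) for i in range(n_tasks)]
    return {p: s for p, s in pool.items()
            if any(s[i] == best[i] for i in range(n_tasks))}

def reflect_and_mutate(prompt, scores):
    """Placeholder for natural-language reflection: in GEPA an LLM inspects the
    system trajectory (reasoning, tool calls, outputs) and rewrites the prompt;
    here we just append a generic hint to keep the sketch runnable."""
    return prompt + " Be precise."

def gepa_sketch(seed_prompt, tasks, budget=8):
    """Evolve prompts under a small rollout budget, sampling parents from the
    Pareto frontier of all attempts so far."""
    pool = {seed_prompt: evaluate(seed_prompt, tasks)}
    for _ in range(budget):
        frontier = pareto_frontier(pool)
        parent = random.choice(list(frontier))
        child = reflect_and_mutate(parent, pool[parent])
        if child not in pool:
            pool[child] = evaluate(child, tasks)
    frontier = pareto_frontier(pool)
    # Return the frontier candidate with the best average score across tasks.
    return max(frontier, key=lambda p: sum(frontier[p]) / len(frontier[p]))
```

The per-task (rather than single-scalar) bookkeeping is the point of the sketch: a candidate that excels on only one task survives on the frontier and can later contribute its lesson to a descendant, which is the "combine complementary lessons" behavior the abstract credits for GEPA's sample efficiency.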