

Code-Space Response Oracles: Generating Interpretable Multi-Agent Policies with Large Language Models

March 10, 2026
作者: Daniel Hennes, Zun Li, John Schultz, Marc Lanctot
cs.AI

Abstract

Recent advances in multi-agent reinforcement learning, particularly Policy-Space Response Oracles (PSRO), have enabled the computation of approximate game-theoretic equilibria in increasingly complex domains. However, these methods rely on deep reinforcement learning oracles that produce "black-box" neural network policies, making them difficult to interpret, trust, or debug. We introduce Code-Space Response Oracles (CSRO), a novel framework that addresses this challenge by replacing RL oracles with Large Language Models (LLMs). CSRO reframes the best response computation as a code generation task, prompting an LLM to generate policies directly as human-readable code. This approach not only yields inherently interpretable policies but also leverages the LLM's pretrained knowledge to discover complex, human-like strategies. We explore multiple ways to construct and enhance an LLM-based oracle: zero-shot prompting, iterative refinement, and AlphaEvolve, a distributed LLM-based evolutionary system. We demonstrate that CSRO achieves performance competitive with baselines while producing a diverse set of explainable policies. Our work presents a new perspective on multi-agent learning, shifting the focus from optimizing opaque policy parameters to synthesizing interpretable algorithmic behavior.
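To make the core idea concrete, here is a minimal illustrative sketch (not the paper's code; all names are hypothetical): in iterated prisoner's dilemma, a "code-space" policy is just a readable function such as tit-for-tat, and a response oracle scores candidate policies empirically against the current opponent population — the step that CSRO delegates to an LLM generating such functions.

```python
# Hypothetical sketch of code-as-policy in iterated prisoner's dilemma.
# Payoffs: (row player, column player) for actions C (cooperate) / D (defect).
PAYOFF = {("C", "C"): (3, 3), ("C", "D"): (0, 5),
          ("D", "C"): (5, 0), ("D", "D"): (1, 1)}

def tit_for_tat(my_history, opp_history):
    """Human-readable policy: cooperate first, then mirror the opponent."""
    return "C" if not opp_history else opp_history[-1]

def always_defect(my_history, opp_history):
    """A simpler candidate policy for comparison."""
    return "D"

def play_match(policy_a, policy_b, rounds=10):
    """Simulate a repeated game; return the average payoff to policy_a."""
    hist_a, hist_b, total = [], [], 0
    for _ in range(rounds):
        a = policy_a(hist_a, hist_b)
        b = policy_b(hist_b, hist_a)
        total += PAYOFF[(a, b)][0]
        hist_a.append(a)
        hist_b.append(b)
    return total / rounds

def best_response(candidates, population):
    """Empirical response oracle: pick the candidate with the highest
    mean payoff against the opponent population. In CSRO, the candidate
    set would be code policies proposed by an LLM."""
    def mean_payoff(policy):
        return sum(play_match(policy, opp) for opp in population) / len(population)
    return max(candidates, key=mean_payoff)
```

Because each policy is ordinary source code, the best response found this way can be read, audited, and debugged directly, in contrast to a neural network policy's opaque parameters.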