
Training Long-Context, Multi-Turn Software Engineering Agents with Reinforcement Learning

August 5, 2025
作者: Alexander Golubev, Maria Trofimova, Sergei Polezhaev, Ibragim Badertdinov, Maksim Nekrashevich, Anton Shevtsov, Simon Karasik, Sergey Abramov, Andrei Andriushchenko, Filipp Fisin, Sergei Skvortsov, Boris Yangel
cs.AI

Abstract

Research on applications of Reinforcement Learning (RL) to Large Language Models (LLMs) has mostly been focused on single-turn problems, such as mathematical reasoning or single-shot code generation. While these problems can be viewed as token-level multi-turn MDPs, this view corresponds to a degenerate case of multi-turn interaction where the environment provides no feedback. This contrasts with many real-world domains, such as software engineering (SWE), which require rich multi-turn interactions with a stateful environment that responds to each action with a non-trivial observation. To bridge this gap, we demonstrate the successful application of RL to this general regime. Using a modified Decoupled Advantage Policy Optimization (DAPO) algorithm, we train an agent based on Qwen2.5-72B-Instruct to solve real-world software engineering tasks. Our approach increases the agent's success rate on the SWE-bench Verified benchmark from a 20% rejection fine-tuned baseline to 39%, without relying on any teacher models. On SWE-rebench, our agent matches or outperforms leading open-weight models such as DeepSeek-V3-0324 and Qwen3-235B-A22B using an identical scaffolding, offering a viable path toward building more capable autonomous agents for complex real-world problems based on open models.
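The contrast the abstract draws, between single-turn generation (a degenerate MDP with no environment feedback) and multi-turn interaction with a stateful environment, can be illustrated with a toy sketch. All names here (`ToyShellEnv`, `rollout`, the guessing task) are illustrative assumptions, not part of the paper's actual SWE environment or training code; the point is only that each action yields a non-trivial observation that a competent policy must use.

```python
from dataclasses import dataclass, field

# Toy stateful environment: every action returns a non-trivial
# observation, unlike the degenerate single-turn case where the
# environment gives no feedback.
@dataclass
class ToyShellEnv:
    target: int = 11
    history: list = field(default_factory=list)

    def step(self, action: int):
        self.history.append(action)
        if action == self.target:
            return "done", 1.0, True  # (observation, reward, terminal)
        obs = "too_low" if action < self.target else "too_high"
        return obs, 0.0, False

def rollout(env, policy, max_turns=10):
    """Collect one multi-turn trajectory of (action, obs, reward) steps."""
    traj, obs = [], "start"
    for _ in range(max_turns):
        action = policy(obs)
        obs, reward, done = env.step(action)
        traj.append((action, obs, reward))
        if done:
            break
    return traj

# A policy that actually conditions on environment feedback,
# narrowing its search interval binary-search style.
def make_policy(lo=0, hi=15):
    state = {"lo": lo, "hi": hi}
    def policy(obs):
        if obs == "too_low":
            state["lo"] = state["last"] + 1
        elif obs == "too_high":
            state["hi"] = state["last"] - 1
        guess = (state["lo"] + state["hi"]) // 2
        state["last"] = guess
        return guess
    return policy

traj = rollout(ToyShellEnv(target=11), make_policy())
print(traj)  # multi-turn trajectory ending in a reward-1.0 step
```

In an RL setup like the one described, such trajectories (with real shell/test feedback instead of this toy signal) would supply the per-episode rewards that a policy-gradient method such as DAPO optimizes over.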