

RLAnything: Forge Environment, Policy, and Reward Model in Completely Dynamic RL System

February 2, 2026
Authors: Yinjie Wang, Tianbao Xie, Ke Shen, Mengdi Wang, Ling Yang
cs.AI

Abstract

We propose RLAnything, a reinforcement learning framework that dynamically forges environment, policy, and reward models through closed-loop optimization, amplifying learning signals and strengthening the overall RL system for any LLM or agentic scenario. Specifically, the policy is trained with integrated feedback from step-wise and outcome signals, while the reward model is jointly optimized via consistency feedback, which in turn further improves policy training. Moreover, our theory-motivated automatic environment adaptation improves training for both the reward and policy models by leveraging critique feedback from each, enabling learning from experience. Empirically, each added component consistently improves the overall system, and RLAnything yields substantial gains across various representative LLM and agentic tasks, boosting Qwen3-VL-8B-Thinking by 9.1% on OSWorld and Qwen2.5-7B-Instruct by 18.7% and 11.9% on AlfWorld and LiveBench, respectively. We also find that optimized reward-model signals outperform outcome signals that rely on human labels. Code: https://github.com/Gen-Verse/Open-AgentRL
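The abstract's "integrated feedback from step-wise and outcome signals" can be illustrated with a minimal sketch. The function below, a hypothetical construction not taken from the paper or its repository, blends per-step reward-model scores with a single trajectory-level outcome reward and computes discounted per-step returns; the blending rule, the function name, and the hyperparameters `alpha` and `gamma` are all illustrative assumptions.

```python
# Hypothetical sketch (not the paper's actual method): blending step-wise
# reward-model scores with a trajectory-level outcome signal, then
# computing standard discounted returns for policy training.
from typing import List


def integrated_returns(step_scores: List[float],
                       outcome: float,
                       alpha: float = 0.5,
                       gamma: float = 0.99) -> List[float]:
    """Blend per-step reward-model scores with a final outcome reward.

    alpha weights the step-wise signal against the outcome signal;
    both the blending rule and the defaults are illustrative only.
    """
    T = len(step_scores)
    # Per-step reward: scaled step-wise signal at every step,
    # plus the weighted outcome reward added at the final step.
    rewards = [alpha * s for s in step_scores]
    rewards[-1] += (1.0 - alpha) * outcome
    # Discounted return, accumulated backwards over the trajectory.
    returns = [0.0] * T
    running = 0.0
    for t in reversed(range(T)):
        running = rewards[t] + gamma * running
        returns[t] = running
    return returns


# Example: three steps scored by a reward model, successful outcome (1.0).
print(integrated_returns([0.2, 0.5, 0.8], outcome=1.0))
```

In this sketch, earlier steps inherit credit from both later step-wise scores and the final outcome through discounting, which is one simple way a dense step-level signal and a sparse outcome signal could be combined into one training target.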