Experiential Reinforcement Learning
February 15, 2026
Authors: Taiwei Shi, Sihao Chen, Bowen Jiang, Linxin Song, Longqi Yang, Jieyu Zhao
cs.AI
Abstract
Reinforcement learning has become the central approach for language models (LMs) to learn from environmental rewards and feedback. In practice, such feedback is often sparse and delayed, and learning from it is challenging because LMs must implicitly infer how observed failures should translate into behavioral changes in future iterations. We introduce Experiential Reinforcement Learning (ERL), a training paradigm that embeds an explicit experience-reflection-consolidation loop into the reinforcement learning process. Given a task, the model generates an initial attempt, receives environmental feedback, and produces a reflection that guides a refined second attempt, whose success is reinforced and internalized into the base policy. This process converts feedback into structured behavioral revision, improving exploration and stabilizing optimization while preserving gains at deployment without additional inference cost. Across sparse-reward control environments and agentic reasoning benchmarks, ERL consistently improves learning efficiency and final performance over strong reinforcement learning baselines, achieving gains of up to +81% in complex multi-step environments and up to +11% in tool-using reasoning tasks. These results suggest that integrating explicit self-reflection into policy training provides a practical mechanism for transforming feedback into durable behavioral improvement.
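The abstract describes the experience-reflection-consolidation loop only at a high level. The sketch below is one plausible reading of that loop, not the authors' implementation: the `policy` and `env` interfaces, the prompt wording, and the reward handling are all assumptions made for illustration.

```python
# Minimal sketch of one ERL rollout, assuming hypothetical `policy` and `env`
# objects with generate/evaluate methods (illustrative only, not the paper's code).

def erl_rollout(policy, env, task):
    """Collect an experience-reflection-consolidation trajectory for one task."""
    # 1. Experience: the model makes an initial attempt and receives feedback.
    attempt_1 = policy.generate(prompt=task)
    feedback, reward_1 = env.evaluate(task, attempt_1)

    # 2. Reflection: the (possibly sparse or delayed) feedback is turned into an
    #    explicit natural-language revision plan.
    reflection = policy.generate(
        prompt=(
            f"{task}\nAttempt: {attempt_1}\nFeedback: {feedback}\n"
            "Reflect on the outcome and state how the next attempt should change."
        )
    )

    # 3. Refined attempt: the reflection conditions a second, improved attempt.
    attempt_2 = policy.generate(prompt=f"{task}\nReflection: {reflection}")
    _, reward_2 = env.evaluate(task, attempt_2)

    # 4. Consolidation: both attempts (with their rewards) are returned so the
    #    RL update can reinforce the successful revised behavior into the base
    #    policy, preserving the gain at deployment without a reflection step.
    return [
        {"prompt": task, "response": attempt_1, "reward": reward_1},
        {"prompt": task, "response": attempt_2, "reward": reward_2},
    ]
```

In training, trajectories like these would feed a standard policy-gradient update; how the two attempts are weighted against each other is not specified in the abstract and is left open here.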