Beyond Markovian: Reflective Exploration via Bayes-Adaptive RL for LLM Reasoning

May 26, 2025
作者: Shenao Zhang, Yaqing Wang, Yinxiao Liu, Tianqi Liu, Peter Grabowski, Eugene Ie, Zhaoran Wang, Yunxuan Li
cs.AI

Abstract

Large Language Models (LLMs) trained via Reinforcement Learning (RL) have exhibited strong reasoning capabilities and emergent reflective behaviors, such as backtracking and error correction. However, conventional Markovian RL confines exploration to the training phase to learn an optimal deterministic policy and depends on the history context only through the current state. Therefore, it remains unclear whether reflective reasoning will emerge during Markovian RL training, or why such behaviors are beneficial at test time. To remedy this, we recast reflective exploration within the Bayes-Adaptive RL framework, which explicitly optimizes the expected return under a posterior distribution over Markov decision processes. This Bayesian formulation inherently incentivizes both reward-maximizing exploitation and information-gathering exploration via belief updates. Our resulting algorithm, BARL, instructs the LLM to stitch and switch strategies based on the observed outcomes, offering principled guidance on when and how the model should reflectively explore. Empirical results on both synthetic and mathematical reasoning tasks demonstrate that BARL outperforms standard Markovian RL approaches at test time, achieving superior token efficiency with improved exploration effectiveness. Our code is available at https://github.com/shenao-zhang/BARL.
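As a brief, informal sketch of the Bayes-Adaptive RL objective the abstract refers to (the notation below is generic textbook notation, not taken from the paper), the policy is optimized in expectation over a posterior belief over MDPs, with the belief updated from observed outcomes:

$$
\max_{\pi}\; \mathbb{E}_{\mathcal{M}\sim b_0}\,\mathbb{E}_{\tau\sim\pi,\,\mathcal{M}}\!\Big[\sum_{t\ge 0}\gamma^{t} r_t\Big],
\qquad
b_{t+1}(\mathcal{M}) \;\propto\; b_t(\mathcal{M})\,P_{\mathcal{M}}(o_{t+1}\mid h_t, a_t),
$$

where $b_t$ is the belief over candidate MDPs $\mathcal{M}$ after observing the history $h_t$, and the policy $\pi$ conditions on the full history rather than the current state alone. Under such an objective, switching strategies when new observations lower the posterior weight of the current hypothesis is itself reward-motivated, which is the intuition behind reflective exploration; the exact objective and belief update used by BARL may differ from this sketch.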
