T^2PO: Uncertainty-Guided Exploration Control for Stable Multi-Turn Agentic Reinforcement Learning

May 4, 2026
作者: Haixin Wang, Hejie Cui, Chenwei Zhang, Xin Liu, Shuowei Jin, Shijie Geng, Xinyang Zhang, Nasser Zalmout, Zhenyu Shi, Yizhou Sun
cs.AI

Abstract

Recent progress in multi-turn reinforcement learning (RL) has significantly improved the performance of reasoning LLMs on complex interactive tasks. Despite advances in stabilization techniques such as fine-grained credit assignment and trajectory filtering, instability remains pervasive and often leads to training collapse. We argue that this instability stems from inefficient exploration in multi-turn settings, where policies continue to generate low-information actions that neither reduce uncertainty nor advance task progress. To address this issue, we propose Token- and Turn-level Policy Optimization (T^2PO), an uncertainty-aware framework that explicitly controls exploration at fine-grained levels. At the token level, T^2PO monitors uncertainty dynamics and triggers a thinking intervention once the marginal uncertainty change falls below a threshold. At the turn level, T^2PO identifies turns with negligible exploration progress and dynamically resamples them to avoid wasted rollouts. We evaluate T^2PO in diverse environments, including WebShop, ALFWorld, and Search QA, demonstrating substantial gains in both training stability and task performance through improved exploration efficiency. Code is available at: https://github.com/WillDreamer/T2PO.
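
To make the two controls concrete, the sketch below illustrates how the token-level intervention trigger and the turn-level resampling test described in the abstract could be realized, assuming predictive entropy as the uncertainty measure. The function names, thresholds, and the entropy proxy are illustrative assumptions for exposition, not the authors' released implementation; see the linked repository for the actual method.

import math
from typing import List

def token_entropy(probs: List[float]) -> float:
    # Shannon entropy of a next-token distribution (illustrative
    # stand-in for the paper's token-level uncertainty measure).
    return -sum(p * math.log(p) for p in probs if p > 0.0)

def needs_thinking_intervention(entropies: List[float], eps: float = 0.05) -> bool:
    # Token level: trigger an intervention once the marginal change in
    # uncertainty between consecutive tokens falls below a threshold eps.
    if len(entropies) < 2:
        return False
    return abs(entropies[-1] - entropies[-2]) < eps

def exploration_progress(entropy_before: float, entropy_after: float) -> float:
    # Turn level: exploration progress proxied as uncertainty reduced
    # across the whole turn (an assumption of this sketch).
    return entropy_before - entropy_after

def should_resample_turn(entropy_before: float, entropy_after: float,
                         min_progress: float = 0.1) -> bool:
    # Resample turns whose exploration progress is negligible instead of
    # spending the remaining rollout budget on them.
    return exploration_progress(entropy_before, entropy_after) < min_progress

if __name__ == "__main__":
    ents = [token_entropy([0.5, 0.5]), token_entropy([0.51, 0.49])]
    print(needs_thinking_intervention(ents))       # True: entropy barely moved
    print(should_resample_turn(ents[0], ents[1]))  # True: negligible progress

In practice the thresholds (eps, min_progress here) would need to be tuned per environment, and the paper's actual uncertainty statistic and intervention criterion may differ from this entropy-based proxy.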