

StraTA: Incentivizing Agentic Reinforcement Learning with Strategic Trajectory Abstraction

May 7, 2026
作者: Xiangyuan Xue, Yifan Zhou, Zidong Wang, Shengji Tang, Philip Torr, Wanli Ouyang, Lei Bai, Zhenfei Yin
cs.AI

Abstract

Large language models (LLMs) are increasingly used as interactive agents, but optimizing them for long-horizon decision making remains difficult because current methods are largely reactive, which weakens both exploration and credit assignment over extended trajectories. In this work, we present Strategic Trajectory Abstraction (StraTA), a simple framework that introduces an explicit trajectory-level strategy into agentic reinforcement learning (RL). StraTA samples a compact strategy from the initial task state, conditions subsequent actions on that strategy, and trains strategy generation and action execution jointly with a hierarchical GRPO-style rollout design, further enhanced by diverse strategy rollout and critical self-judgment. Experiments on ALFWorld, WebShop, and SciWorld show that StraTA consistently improves both sample efficiency and final performance over strong baselines, reaching success rates of 93.1% on ALFWorld and 84.2% on WebShop. On SciWorld, StraTA attains a 63.5% overall score, outperforming frontier closed-source models.
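The hierarchical rollout described above can be sketched in miniature. The sketch below is an illustrative assumption, not the paper's implementation: `sample_strategy` and `rollout_actions` are hypothetical stubs standing in for the LLM's strategy-generation and action-execution calls, and the reward is a random stand-in for task success. What it does show concretely is the two-level structure (diverse strategies sampled from the initial state, one action rollout conditioned on each) and a GRPO-style group-relative advantage computed over the resulting rewards.

```python
import random
import statistics

def sample_strategy(task_state: str) -> str:
    # Hypothetical stub: StraTA samples a compact trajectory-level
    # strategy from the initial task state via the LLM.
    return f"plan-{random.randint(0, 3)}"

def rollout_actions(task_state: str, strategy: str, max_steps: int = 5):
    # Hypothetical stub: action execution conditioned on the strategy.
    trajectory = [(f"act-{t}", strategy) for t in range(max_steps)]
    reward = random.random()  # stand-in for the task's success signal
    return trajectory, reward

def grpo_advantages(rewards):
    # GRPO-style group-relative advantage: center each reward on the
    # group mean and scale by the group standard deviation.
    mean = statistics.mean(rewards)
    std = statistics.pstdev(rewards) or 1.0
    return [(r - mean) / std for r in rewards]

def hierarchical_rollout(task_state: str, num_strategies: int = 4):
    # Outer level: sample a diverse group of strategies.
    strategies = [sample_strategy(task_state) for _ in range(num_strategies)]
    # Inner level: roll out actions under each strategy.
    results = [rollout_actions(task_state, s) for s in strategies]
    rewards = [reward for _, reward in results]
    return strategies, grpo_advantages(rewards)

strategies, advantages = hierarchical_rollout("initial-task-state")
print(len(strategies), len(advantages))
```

Because the advantages are mean-centered within each strategy group, they sum to zero; strategies whose rollouts beat the group average receive positive learning signal, which is the core of the group-relative scheme.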