ASPO: Asymmetric Importance Sampling Policy Optimization
October 7, 2025
Authors: Jiakang Wang, Runze Liu, Lei Lin, Wenping Hu, Xiu Li, Fuzheng Zhang, Guorui Zhou, Kun Gai
cs.AI
Abstract
Recent Large Language Model (LLM) post-training methods rely on token-level
clipping mechanisms during Reinforcement Learning (RL). However, we identify a
fundamental flaw in this Outcome-Supervised RL (OSRL) paradigm: the Importance
Sampling (IS) ratios of positive-advantage tokens are mismatched, leading to
unbalanced token weighting for positive and negative tokens. This mismatch
suppresses the update of low-probability tokens while over-amplifying already
high-probability ones. To address this, we propose Asymmetric Importance
Sampling Policy Optimization (ASPO), which uses a simple yet effective strategy
that flips the IS ratios of positive-advantage tokens, aligning their update
direction with the learning dynamics of negative ones. ASPO further incorporates
a soft dual-clipping mechanism to stabilize extreme updates while maintaining
gradient flow. Comprehensive experiments on coding and mathematical reasoning
benchmarks demonstrate that ASPO significantly mitigates premature convergence,
improves training stability, and enhances final performance over strong
GRPO-based baselines. Our analysis provides new insights into the role of
token-level weighting in OSRL and highlights the critical importance of
correcting IS in LLM RL. The code and models of ASPO are available at
https://github.com/wizard-III/Archer2.0.
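
To make the token-weighting idea concrete, here is a minimal PyTorch sketch of an ASPO-style surrogate loss. It is an illustration under stated assumptions, not the authors' implementation: "flipping" the IS ratio of positive-advantage tokens is modelled as taking its reciprocal, and "soft dual-clipping" as a straight-through clamp that keeps gradients flowing through the unclipped value. The function name `aspo_token_loss` and the hyperparameters `eps` and `clip_max` are hypothetical; the exact formulas are given in the paper and the linked repository.

```python
import torch


def aspo_token_loss(logp_new, logp_old, advantages, eps=0.2, clip_max=3.0):
    """Illustrative ASPO-style token-level objective (not the authors' exact loss).

    Assumptions:
      - Negative-advantage tokens use the standard PPO/GRPO clipped surrogate.
      - Positive-advantage tokens use a "flipped" IS ratio, modelled here as the
        reciprocal pi_old / pi_theta, so low-probability positive tokens get
        larger weights, mirroring the dynamics of negative tokens.
      - The "soft dual-clip" is modelled as a straight-through clamp: the forward
        value is bounded, but gradients flow through the unclipped ratio.
    """
    ratio = torch.exp(logp_new - logp_old)  # standard importance-sampling ratio

    # Standard clipped surrogate, applied to negative-advantage tokens.
    clipped = torch.clamp(ratio, 1.0 - eps, 1.0 + eps)
    neg_obj = torch.min(ratio * advantages, clipped * advantages)

    # Flipped ratio for positive-advantage tokens, with a soft upper clip that
    # keeps the gradient of the unclipped value (straight-through estimator).
    flipped = 1.0 / ratio
    soft_clipped = flipped + (torch.clamp(flipped, max=clip_max) - flipped).detach()
    pos_obj = soft_clipped * advantages

    obj = torch.where(advantages > 0, pos_obj, neg_obj)
    return -obj.mean()  # minimise the negative surrogate objective


if __name__ == "__main__":
    # Toy usage: batch of 4 sequences, 16 tokens each.
    logp_new = torch.randn(4, 16, requires_grad=True)
    logp_old = logp_new.detach() + 0.1 * torch.randn(4, 16)
    adv = torch.randn(4, 16)
    loss = aspo_token_loss(logp_new, logp_old, adv)
    loss.backward()
    print(loss.item())
```

The straight-through clamp is one simple way to realize "stabilize extreme updates while maintaining gradient flow" from the abstract; the paper's actual dual-clipping scheme may differ in form and thresholds.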