
Online Causal Kalman Filtering for Stable and Effective Policy Optimization

February 11, 2026
Authors: Shuo He, Lang Feng, Xin Cheng, Lei Feng, Bo An
cs.AI

Abstract

Reinforcement learning for large language models suffers from high-variance token-level importance sampling (IS) ratios, which can destabilize policy optimization at scale. To improve stability, recent methods typically apply a fixed sequence-level IS ratio to all tokens in a sequence or adjust each token's IS ratio separately, thereby neglecting the temporal off-policy deviation across tokens within a sequence. In this paper, we first empirically show that local off-policy deviation is structurally inconsistent at the token level, which can distort policy-gradient updates across adjacent tokens and lead to training collapse. To address this issue, we propose Online Causal Kalman Filtering for stable and effective Policy Optimization (KPO). Concretely, we model the desired IS ratio as a latent state that evolves across tokens and apply a Kalman filter to update this state online and autoregressively from the states of past tokens, without relying on information from future tokens. The resulting filtered IS ratios preserve token-wise, local structure-aware variation while strongly smoothing noise spikes, yielding more stable and effective policy updates. Experimentally, KPO achieves superior results on challenging math reasoning datasets compared with state-of-the-art counterparts.
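The abstract gives no equations or code, so the following is only a rough illustration of the kind of online, causal filtering it describes: a scalar Kalman filter with a random-walk state model applied to a sequence of raw token-level IS ratios, using only past tokens at each step. The function name, noise variances, and state model here are illustrative assumptions, not the paper's actual formulation.

```python
import numpy as np

def kalman_filter_is_ratios(is_ratios, process_var=1e-3, obs_var=1e-1):
    """Causally smooth token-level importance-sampling ratios (illustrative sketch).

    Treats the desired IS ratio as a scalar latent state following a random walk
    and each raw token-level ratio as a noisy observation of that state. The
    estimate at token t depends only on tokens <= t (no look-ahead).

    Args:
        is_ratios: 1D array of raw token-level IS ratios for one sequence.
        process_var: assumed variance of the state's token-to-token drift.
        obs_var: assumed variance of the observation noise on raw ratios.

    Returns:
        1D array of filtered IS ratios with the same length as the input.
    """
    x = float(is_ratios[0])   # initial state estimate: first observed ratio
    p = obs_var               # initial state uncertainty
    filtered = [x]
    for z in is_ratios[1:]:
        # Predict: random-walk transition keeps the mean, inflates uncertainty.
        p_pred = p + process_var
        # Update: blend the prediction with the new noisy observation.
        k = p_pred / (p_pred + obs_var)   # Kalman gain
        x = x + k * (z - x)               # posterior mean
        p = (1.0 - k) * p_pred            # posterior variance
        filtered.append(x)
    return np.asarray(filtered)

# Example: an isolated noise spike is strongly damped, while gradual
# token-to-token variation in the ratios is largely preserved.
raw = np.array([1.02, 0.98, 1.01, 3.50, 1.00, 0.97])
print(kalman_filter_is_ratios(raw))
```

Because the update is autoregressive over past tokens only, this kind of filter can be applied as each token's ratio becomes available, which matches the "online, causal" behavior described in the abstract.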