

Klear-Reasoner: Advancing Reasoning Capability via Gradient-Preserving Clipping Policy Optimization

August 11, 2025
Authors: Zhenpeng Su, Leiyu Pan, Xue Bai, Dening Liu, Guanting Dong, Jiaming Huang, Wenping Hu, Guorui Zhou
cs.AI

Abstract

We present Klear-Reasoner, a model with long reasoning capabilities that demonstrates careful deliberation during problem solving and achieves outstanding performance across multiple benchmarks. Although the community already offers many excellent works on reasoning models, reproducing high-performance reasoning models remains difficult because training details are often disclosed incompletely. This report provides an in-depth analysis of the reasoning model, covering the entire post-training workflow from data preparation and long Chain-of-Thought supervised fine-tuning (long CoT SFT) to reinforcement learning (RL), along with detailed ablation studies for each experimental component. For SFT data, our experiments show that a small number of high-quality data sources is more effective than a large number of diverse data sources, and that difficult samples can achieve better results without accuracy filtering. In addition, we investigate two key issues with current clipping mechanisms in RL: clipping suppresses critical exploration signals and ignores suboptimal trajectories. To address these challenges, we propose Gradient-Preserving Clipping Policy Optimization (GPPO), which gently backpropagates gradients from clipped tokens. GPPO not only enhances the model's exploration capacity but also improves its efficiency in learning from negative samples. Klear-Reasoner exhibits exceptional reasoning abilities in mathematics and programming, scoring 90.5% on AIME 2024, 83.2% on AIME 2025, 66.0% on LiveCodeBench V5, and 58.1% on LiveCodeBench V6.
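The abstract does not spell out GPPO's objective. As a rough, non-authoritative illustration of the issue it targets, the PyTorch sketch below contrasts the standard PPO clipped surrogate, where torch.clamp zeroes the gradient for clipped tokens, with a hypothetical gradient-preserving variant that keeps the clipped value in the forward pass while routing the backward pass through the unclipped ratio. The function names and the straight-through detach trick are illustrative assumptions, not the paper's actual GPPO formulation.

```python
import torch

def ppo_clip_loss(ratio, advantage, eps=0.2):
    # Standard PPO clipped surrogate: tokens pushed onto the clipped branch
    # receive zero gradient, because torch.clamp blocks gradient flow there.
    clipped = torch.clamp(ratio, 1.0 - eps, 1.0 + eps)
    return -torch.min(ratio * advantage, clipped * advantage).mean()

def gradient_preserving_clip_loss(ratio, advantage, eps=0.2):
    # Hypothetical gradient-preserving clip (NOT the paper's exact GPPO rule):
    # the forward value equals the clipped surrogate, but a straight-through
    # detach trick lets gradients flow through the unclipped ratio, so
    # clipped tokens still contribute a bounded learning signal.
    clipped = torch.clamp(ratio, 1.0 - eps, 1.0 + eps)
    soft_clipped = clipped.detach() + ratio - ratio.detach()
    return -torch.min(ratio * advantage, soft_clipped * advantage).mean()

# Toy usage with per-token importance ratios and advantages.
log_ratio = torch.randn(8, requires_grad=True) * 0.5
ratio = torch.exp(log_ratio)
advantage = torch.randn(8)
gradient_preserving_clip_loss(ratio, advantage).backward()
```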