Klear-Reasoner: Advancing Reasoning Capability via Gradient-Preserving Clipping Policy Optimization

August 11, 2025
Authors: Zhenpeng Su, Leiyu Pan, Xue Bai, Dening Liu, Guanting Dong, Jiaming Huang, Wenping Hu, Guorui Zhou
cs.AI

Abstract

We present Klear-Reasoner, a model with long reasoning capabilities that demonstrates careful deliberation during problem solving and achieves outstanding performance across multiple benchmarks. Although the community has already produced many excellent works on reasoning models, reproducing high-performance reasoning models remains difficult because training details are often incompletely disclosed. This report provides an in-depth analysis of the reasoning model, covering the entire post-training workflow from data preparation and long Chain-of-Thought supervised fine-tuning (long CoT SFT) to reinforcement learning (RL), along with detailed ablation studies of each experimental component. For SFT data, our experiments show that a small number of high-quality data sources is more effective than a large number of diverse data sources, and that difficult samples can achieve better results without accuracy filtering. In addition, we investigate two key issues with the clipping mechanisms in current RL: clipping suppresses critical exploration signals and ignores suboptimal trajectories. To address these challenges, we propose Gradient-Preserving clipping Policy Optimization (GPPO), which gently backpropagates gradients from clipped tokens. GPPO not only enhances the model's exploration capacity but also improves its efficiency in learning from negative samples. Klear-Reasoner exhibits exceptional reasoning abilities in mathematics and programming, scoring 90.5% on AIME 2024, 83.2% on AIME 2025, 66.0% on LiveCodeBench V5, and 58.1% on LiveCodeBench V6.
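
The abstract describes GPPO only at a high level: clipped tokens should still contribute "gentle" gradients rather than none. As a rough illustration, below is a minimal PyTorch sketch of one way such gradient-preserving clipping could be realized; the function name, signature, hyperparameters, and the detach-based reparameterization are our assumptions for illustration, not the authors' released implementation.

import torch

def gppo_loss(logp_new, logp_old, advantages, eps_low=0.2, eps_high=0.2):
    # Hypothetical sketch of a gradient-preserving clipped policy loss.
    # Standard PPO-style clipping zeroes gradients for tokens whose
    # importance ratio leaves [1 - eps_low, 1 + eps_high]; here the
    # forward value is still pinned to the clip boundary, but a scaled
    # gradient flows through via a stop-gradient reparameterization.
    ratio = torch.exp(logp_new - logp_old)  # token-level importance ratio r_t
    lo, hi = 1.0 - eps_low, 1.0 + eps_high

    # (ratio / ratio.detach()) * bound equals `bound` in the forward pass,
    # but backpropagates a gradient attenuated by bound / r_t instead of
    # the zero gradient produced by a hard torch.clamp.
    soft_lo = ratio / ratio.detach() * lo
    soft_hi = ratio / ratio.detach() * hi
    clipped = torch.where(ratio < lo, soft_lo,
              torch.where(ratio > hi, soft_hi, ratio))

    # Same pessimistic token-wise minimum as PPO.
    loss = -torch.min(ratio * advantages, clipped * advantages)
    return loss.mean()

In this sketch, a token clipped at the upper bound still contributes a gradient scaled by (1 + eps_high) / r_t, so the farther a token overshoots the trust region, the gentler (but never zero) its update, consistent with the stated goals of preserving exploration signals and learning from suboptimal trajectories.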