
Reasoning-Aware GRPO using Process Mining

October 29, 2025
Authors: Taekhyun Park, Yongjae Lee, Hyerim Bae
cs.AI

Abstract

Reinforcement learning (RL)-based post-training has been crucial for enabling multi-step reasoning in large reasoning models (LRMs), yet current reward schemes are typically outcome-centric. We propose PM4GRPO, a reasoning-aware Group Relative Policy Optimization (GRPO) that augments standard answer/format rewards with signals over the reasoning procedure. To this end, process mining techniques are utilized to compute a scalar conformance reward that measures how closely a policy model's reasoning aligns with that of the pretrained teacher model. The empirical results on five benchmarks demonstrate that PM4GRPO significantly outperforms existing methodologies for GRPO-based post-training. These results highlight that leveraging process mining for reasoning-aware GRPO effectively enhances the reasoning capabilities of policy models.
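The sketch below illustrates the general idea of augmenting outcome-centric rewards with a conformance signal. It is a minimal illustration, not the paper's implementation: the reward weights (`W_ANSWER`, `W_FORMAT`, `W_CONFORMANCE`), the step labels, and the helper names are hypothetical, and a simple sequence-alignment ratio from the standard library stands in for a full process-mining conformance check (e.g., replaying the policy's reasoning trace against a process model mined from teacher traces).

```python
import difflib
from typing import List

# Hypothetical weights for the combined reward; the abstract does not
# specify how the answer, format, and conformance terms are weighted.
W_ANSWER, W_FORMAT, W_CONFORMANCE = 1.0, 0.2, 0.5

def conformance_reward(policy_steps: List[str], teacher_steps: List[str]) -> float:
    """Scalar in [0, 1] measuring how closely the policy's reasoning
    procedure follows the teacher's. A sequence-alignment ratio is used
    here as a lightweight stand-in for a process-mining conformance
    check over mined reasoning traces."""
    return difflib.SequenceMatcher(None, policy_steps, teacher_steps).ratio()

def pm4grpo_reward(answer_correct: bool, format_ok: bool,
                   policy_steps: List[str], teacher_steps: List[str]) -> float:
    """Outcome-centric answer/format rewards augmented with the
    reasoning-aware conformance signal described in the abstract."""
    r = W_ANSWER * float(answer_correct) + W_FORMAT * float(format_ok)
    r += W_CONFORMANCE * conformance_reward(policy_steps, teacher_steps)
    return r

# Example: a rollout whose reasoning only partially matches the teacher's trace.
teacher = ["restate_problem", "decompose", "solve_subgoal", "verify", "answer"]
policy = ["restate_problem", "solve_subgoal", "answer"]
print(pm4grpo_reward(answer_correct=True, format_ok=True,
                     policy_steps=policy, teacher_steps=teacher))
```

In a GRPO setup, this scalar would be computed per rollout and then advantage-normalized within each group, exactly as the standard answer/format rewards are.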