

Error Typing for Smarter Rewards: Improving Process Reward Models with Error-Aware Hierarchical Supervision

May 26, 2025
作者: Tej Deep Pala, Panshul Sharma, Amir Zadeh, Chuan Li, Soujanya Poria
cs.AI

Abstract

Large Language Models (LLMs) are prone to hallucination, especially during multi-hop and reasoning-intensive tasks such as mathematical problem solving. While Outcome Reward Models verify only final answers, Process Reward Models (PRMs) score each intermediate step to steer generation toward coherent solutions. We introduce PathFinder-PRM, a novel hierarchical, error-aware discriminative PRM that first classifies math and consistency errors at each step, then combines these fine-grained signals to estimate step correctness. To train PathFinder-PRM, we construct a 400K-sample dataset by enriching the human-annotated PRM800K corpus and RLHFlow Mistral traces with three-dimensional step-level labels. On PRMBench, PathFinder-PRM achieves a new state-of-the-art PRMScore of 67.7, outperforming the prior best (65.5) while using three times less data. When applied to reward-guided greedy search, our model yields a prm@8 of 48.3, a +1.5 point gain over the strongest baseline. These results demonstrate that decoupled error detection and reward estimation not only boost fine-grained error detection but also substantially improve end-to-end, reward-guided mathematical reasoning with greater data efficiency.
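To make the hierarchical, error-aware idea concrete, below is a minimal sketch of how per-step error signals might be combined into a single step-correctness score. The `StepErrorSignals` type and the independence-style combination rule are illustrative assumptions, not the authors' actual model or label scheme.

```python
# Minimal sketch of hierarchical, error-aware step scoring (illustrative, not the paper's code).
# Assumes some upstream classifier produces per-step probabilities for the two
# fine-grained error types described in the abstract: math errors and consistency errors.
from dataclasses import dataclass


@dataclass
class StepErrorSignals:
    p_math_error: float          # probability the step contains a mathematical error
    p_consistency_error: float   # probability the step contradicts earlier steps


def step_correctness(signals: StepErrorSignals) -> float:
    """Combine fine-grained error signals into one step-correctness score."""
    # A step is treated as correct only if it is free of both error types;
    # treating the two signals as independent is an assumption made for this sketch.
    return (1.0 - signals.p_math_error) * (1.0 - signals.p_consistency_error)


# Example: a step judged unlikely to contain either error type scores highly.
print(step_correctness(StepErrorSignals(p_math_error=0.1, p_consistency_error=0.05)))  # 0.855
```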
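The abstract also reports results for reward-guided greedy search (prm@8). The sketch below shows one common form of such a search: at each step, candidate continuations are sampled and the one the PRM scores highest is kept. `generate_candidates`, `prm_score`, and the "Final answer" stopping check are hypothetical stand-ins; the paper's exact search procedure and prm@8 protocol may differ.

```python
# Minimal sketch of reward-guided greedy search with a PRM (assumptions noted above).
from typing import Callable, List


def reward_guided_greedy_search(
    problem: str,
    generate_candidates: Callable[[str, List[str]], List[str]],  # proposes next-step candidates
    prm_score: Callable[[str, List[str]], float],                # scores a partial solution
    max_steps: int = 16,
) -> List[str]:
    steps: List[str] = []
    for _ in range(max_steps):
        candidates = generate_candidates(problem, steps)
        if not candidates:
            break
        # Greedily keep the candidate step the PRM scores highest.
        best = max(candidates, key=lambda step: prm_score(problem, steps + [step]))
        steps.append(best)
        # Hypothetical stopping condition once a final answer is emitted.
        if best.strip().startswith("Final answer"):
            break
    return steps
```

Under a prm@k evaluation such as prm@8, one would typically produce several complete solutions (here, eight) and score the run as correct if the solution the PRM ranks highest answers the problem correctly.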
