SPC: Evolving Self-Play Critic via Adversarial Games for LLM Reasoning

April 27, 2025
作者: Jiaqi Chen, Bang Zhang, Ruotian Ma, Peisong Wang, Xiaodan Liang, Zhaopeng Tu, Xiaolong Li, Kwan-Yee K. Wong
cs.AI

Abstract

Evaluating the step-by-step reliability of large language model (LLM) reasoning, such as Chain-of-Thought, remains challenging due to the difficulty and cost of obtaining high-quality step-level supervision. In this paper, we introduce Self-Play Critic (SPC), a novel approach where a critic model evolves its ability to assess reasoning steps through adversarial self-play games, eliminating the need for manual step-level annotation. SPC involves fine-tuning two copies of a base model to play two roles, namely a "sneaky generator" that deliberately produces erroneous steps designed to be difficult to detect, and a "critic" that analyzes the correctness of reasoning steps. These two models engage in an adversarial game in which the generator aims to fool the critic, while the critic model seeks to identify the generator's errors. Using reinforcement learning based on the game outcomes, the models iteratively improve; the winner of each confrontation receives a positive reward and the loser receives a negative reward, driving continuous self-evolution. Experiments on three reasoning process benchmarks (ProcessBench, PRM800K, DeltaBench) demonstrate that our SPC progressively enhances its error detection capabilities (e.g., accuracy increases from 70.8% to 77.7% on ProcessBench) and surpasses strong baselines, including a distilled R1 model. Furthermore, applying SPC to guide the test-time search of diverse LLMs significantly improves their mathematical reasoning performance on MATH500 and AIME2024, outperforming state-of-the-art process reward models.
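To make the adversarial reward scheme concrete, the following is a minimal, illustrative sketch of one self-play round as described in the abstract: a zero-sum game in which the critic earns a positive reward when its verdict matches whether the sneaky generator actually injected an error, and the generator earns a positive reward when it fools the critic. The functions `sneaky_generator` and `critic` are hypothetical stubs standing in for the two fine-tuned copies of the base LLM; in SPC, the resulting rewards would drive reinforcement-learning updates to both models.

```python
# Illustrative sketch of SPC's adversarial self-play reward assignment.
# The model calls below are hypothetical stubs, not the paper's actual models.
import random
from dataclasses import dataclass

@dataclass
class GameResult:
    step_is_erroneous: bool   # ground truth: did the generator inject an error?
    critic_says_error: bool   # critic's verdict on the step
    generator_reward: float
    critic_reward: float

def sneaky_generator(problem: str) -> tuple[str, bool]:
    """Hypothetical stand-in for the sneaky generator: produce a reasoning step
    that may contain a deliberately hard-to-detect error."""
    is_erroneous = random.random() < 0.5
    return f"a reasoning step for {problem!r}", is_erroneous

def critic(step: str) -> bool:
    """Hypothetical stand-in for the critic: return True if the step is flagged
    as erroneous."""
    return random.random() < 0.5

def play_round(problem: str) -> GameResult:
    step, is_erroneous = sneaky_generator(problem)
    flagged = critic(step)
    # Zero-sum scoring: the critic wins when its verdict matches the ground
    # truth; otherwise the generator wins because it fooled the critic.
    critic_wins = (flagged == is_erroneous)
    return GameResult(
        step_is_erroneous=is_erroneous,
        critic_says_error=flagged,
        generator_reward=-1.0 if critic_wins else 1.0,
        critic_reward=1.0 if critic_wins else -1.0,
    )

if __name__ == "__main__":
    for _ in range(4):
        print(play_round("2 + 2 * 3 = ?"))
    # In SPC, these per-round rewards would feed reinforcement learning on both
    # copies of the base model, so generator and critic co-evolve over iterations.
```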
