

V_1: Unifying Generation and Self-Verification for Parallel Reasoners

March 4, 2026
作者: Harman Singh, Xiuyu Li, Kusha Sareen, Monishwaran Maheswaran, Sijun Tan, Xiaoxia Wu, Junxiong Wang, Alpay Ariyak, Qingyang Wu, Samir Khaki, Rishabh Tiwari, Long Lian, Yucheng Lu, Boyi Li, Alane Suhr, Ben Athiwaratkun, Kurt Keutzer
cs.AI

Abstract

Test-time scaling for complex reasoning tasks shows that leveraging inference-time compute, by methods such as independently sampling and aggregating multiple solutions, results in significantly better task outcomes. However, a critical bottleneck is verification: sampling is only effective if correct solutions can be reliably identified among candidates. While existing approaches typically evaluate candidates independently via scalar scoring, we demonstrate that models are substantially stronger at pairwise self-verification. Leveraging this insight, we introduce V_1, a framework that unifies generation and verification through efficient pairwise ranking. V_1 comprises two components: V_1-Infer, an uncertainty-guided algorithm using a tournament-based ranking that dynamically allocates self-verification compute to candidate pairs whose relative correctness is most uncertain; and V_1-PairRL, an RL framework that jointly trains a single model as both generator and pairwise self-verifier, ensuring the verifier adapts to the generator's evolving distribution. On code generation (LiveCodeBench, CodeContests, SWE-Bench) and math reasoning (AIME, HMMT) benchmarks, V_1-Infer improves Pass@1 by up to 10% over pointwise verification and outperforms recent test-time scaling methods while being significantly more efficient. Furthermore, V_1-PairRL achieves 7-9% test-time scaling gains over standard RL and pointwise joint training, and improves base Pass@1 by up to 8.7% over standard RL in a code-generation setting.
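The abstract does not spell out V_1-Infer's tournament procedure, but the general idea of uncertainty-guided pairwise ranking can be sketched in a few lines. The toy Python below is a minimal illustration, not the paper's implementation: `pairwise_verify` is a hypothetical stand-in for the model's pairwise self-verification call (here simulated with noisy scalar scores), and comparisons are allocated to the pair whose empirical win rate is closest to 0.5, i.e. whose relative correctness is most uncertain.

```python
import random
from itertools import combinations

def pairwise_verify(a, b):
    # Hypothetical stand-in for the model's pairwise self-verification:
    # returns True if candidate `a` is judged more likely correct than `b`.
    # Simulated here as a noisy comparison of latent quality scores.
    return a["score"] + random.gauss(0, 0.05) > b["score"] + random.gauss(0, 0.05)

def uncertainty_guided_rank(candidates, budget):
    """Toy uncertainty-guided pairwise tournament (a sketch, not V_1-Infer itself).

    Each candidate pair keeps a (wins, total) tally. Every new comparison
    goes to the pair whose empirical win rate is closest to 0.5 -- the pair
    whose relative correctness is most uncertain -- with never-compared
    pairs taking priority. Returns the index of the candidate that
    accumulated the most pairwise wins.
    """
    tallies = {p: [0, 0] for p in combinations(range(len(candidates)), 2)}
    for _ in range(budget):
        # Unseen pairs get key -1.0, so they are always compared first.
        pair = min(
            tallies,
            key=lambda p: abs(tallies[p][0] / tallies[p][1] - 0.5)
            if tallies[p][1] else -1.0,
        )
        i, j = pair
        if pairwise_verify(candidates[i], candidates[j]):
            tallies[pair][0] += 1
        tallies[pair][1] += 1
    # Aggregate pairwise outcomes into per-candidate win counts.
    wins = [0] * len(candidates)
    for (i, j), (w, t) in tallies.items():
        wins[i] += w
        wins[j] += t - w
    return max(range(len(candidates)), key=wins.__getitem__)
```

With, say, three sampled solutions whose latent quality clearly differs, the tournament converges on the best one while spending most of its budget on the closest contests; the paper's actual method additionally trains the verifier jointly with the generator (V_1-PairRL), which this sketch does not model.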