Value-Guided Search for Efficient Chain-of-Thought Reasoning
May 23, 2025
Authors: Kaiwen Wang, Jin Peng Zhou, Jonathan Chang, Zhaolin Gao, Nathan Kallus, Kianté Brantley, Wen Sun
cs.AI
Abstract
In this paper, we propose a simple and efficient method for value model
training on long-context reasoning traces. Compared to existing process reward
models (PRMs), our method does not require a fine-grained notion of "step,"
which is difficult to define for long-context reasoning models. By collecting a
dataset of 2.5 million reasoning traces, we train a 1.5B-parameter token-level value
model and apply it to DeepSeek models for improved performance with test-time
compute scaling. We find that block-wise value-guided search (VGS) with a final
weighted majority vote achieves better test-time scaling than standard methods
such as majority voting or best-of-n. With an inference budget of 64
generations, VGS with DeepSeek-R1-Distill-1.5B achieves an average accuracy of
45.7% across four competition math benchmarks (AIME 2024 & 2025, HMMT Feb 2024
& 2025), reaching parity with o3-mini-medium. Moreover, VGS significantly
reduces the inference FLOPs required to achieve the same performance as
majority voting. Our dataset, model, and codebase are open-sourced.
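To make the search procedure concrete, below is a minimal Python sketch of block-wise value-guided search with a final weighted majority vote, as described in the abstract. The `generate_block`, `value_score`, `extract_answer`, and `is_complete` callables, along with the default budgets, are illustrative placeholders, not the released implementation.

```python
from collections import defaultdict
from typing import Callable, List

def block_wise_vgs(
    prompt: str,
    generate_block: Callable[[str], str],  # samples one block of tokens given a prefix (placeholder)
    value_score: Callable[[str], float],   # value model's score for a prefix (placeholder)
    extract_answer: Callable[[str], str],  # parses the final answer from a complete trace (placeholder)
    is_complete: Callable[[str], bool],    # detects end-of-solution (placeholder)
    beam_width: int = 4,
    samples_per_beam: int = 4,
    max_blocks: int = 64,
) -> str:
    """Block-wise value-guided search with a final weighted majority vote (sketch)."""
    beams = [prompt] * beam_width
    finished: List[str] = []

    for _ in range(max_blocks):
        # Expand: sample several candidate next blocks for each active beam.
        candidates = [b + generate_block(b) for b in beams for _ in range(samples_per_beam)]
        # Set completed traces aside; keep searching on the rest.
        finished += [c for c in candidates if is_complete(c)]
        ongoing = [c for c in candidates if not is_complete(c)]
        if not ongoing:
            break
        # Select: keep the partial traces the value model scores highest.
        ongoing.sort(key=value_score, reverse=True)
        beams = ongoing[:beam_width]

    # Final weighted majority vote: each trace votes for its extracted answer,
    # weighted by the value model's score for the full trace. Fall back to the
    # surviving beams if no trace terminated within the block budget.
    votes: defaultdict = defaultdict(float)
    for trace in finished or beams:
        votes[extract_answer(trace)] += value_score(trace)
    return max(votes, key=votes.get)
```

Because selection happens at block boundaries rather than at model-defined "steps," this style of search needs no step segmentation of the reasoning trace, which is the property the abstract contrasts against process reward models.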