Can Large Language Models Detect Errors in Long Chain-of-Thought Reasoning?
February 26, 2025
Authors: Yancheng He, Shilong Li, Jiaheng Liu, Weixun Wang, Xingyuan Bu, Ge Zhang, Zhongyuan Peng, Zhaoxiang Zhang, Wenbo Su, Bo Zheng
cs.AI
Abstract
Recently, o1-like models have drawn significant attention; these models produce long Chain-of-Thought (CoT) reasoning steps to improve the reasoning abilities of existing Large Language Models (LLMs). In this paper, to understand the quality of these long CoTs and to measure the critique abilities of existing LLMs on them, we introduce DeltaBench, which contains long CoTs generated by different o1-like models (e.g., QwQ, DeepSeek-R1) for different reasoning tasks (e.g., math, code, general reasoning), to measure the ability to detect errors in long CoT reasoning. Based on DeltaBench, we first perform a fine-grained analysis of the generated long CoTs to examine the effectiveness and efficiency of different o1-like models. Then, we conduct extensive evaluations of existing process reward models (PRMs) and critic models on detecting errors in each annotated section, aiming to probe the boundaries and limitations of existing PRMs and critic models. Finally, we hope that DeltaBench can guide developers to better understand the long CoT reasoning abilities of their models.
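To make the evaluation protocol concrete, the sketch below shows one way a critic model could be scored on DeltaBench-style data: a long CoT is split into human-annotated sections, the critic flags the sections it believes contain errors, and the flags are scored with section-level F1. This is a minimal sketch under stated assumptions; the data schema and the dummy critic_predict heuristic are illustrative placeholders, not the paper's actual harness or format.

```python
# Minimal sketch of section-level error detection scoring in the spirit of
# DeltaBench. The Section schema and critic_predict are hypothetical stand-ins.

from dataclasses import dataclass
from typing import List

@dataclass
class Section:
    text: str          # one section of the long CoT
    has_error: bool    # human annotation: does this section contain an error?

def critic_predict(sections: List[Section]) -> List[bool]:
    """Placeholder for a critic model / PRM call.

    A real implementation would prompt an LLM (or score steps with a PRM)
    and return one error flag per section; here a trivial keyword heuristic
    keeps the sketch runnable end to end.
    """
    return ["wait" in s.text.lower() for s in sections]  # dummy heuristic

def section_f1(gold: List[bool], pred: List[bool]) -> float:
    """F1 over error sections: precision/recall of the predicted error flags."""
    tp = sum(g and p for g, p in zip(gold, pred))
    fp = sum((not g) and p for g, p in zip(gold, pred))
    fn = sum(g and (not p) for g, p in zip(gold, pred))
    if tp == 0:
        return 0.0
    precision = tp / (tp + fp)
    recall = tp / (tp + fn)
    return 2 * precision * recall / (precision + recall)

if __name__ == "__main__":
    cot = [
        Section("Let x = 3, so 2x + 1 = 7.", has_error=False),
        Section("Wait, substituting back gives 2*3 + 1 = 8.", has_error=True),
        Section("Therefore the answer is 7.", has_error=False),
    ]
    preds = critic_predict(cot)
    gold = [s.has_error for s in cot]
    print(f"section-level F1: {section_f1(gold, preds):.2f}")
```

Scoring at the section level, rather than per final answer, is what lets a benchmark like this localize where in a long CoT a critic succeeds or fails.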