Deductive Verification of Chain-of-Thought Reasoning

June 6, 2023
Authors: Zhan Ling, Yunhao Fang, Xuanlin Li, Zhiao Huang, Mingu Lee, Roland Memisevic, Hao Su
cs.AI

Abstract

Large Language Models (LLMs) benefit significantly from Chain-of-Thought (CoT) prompting when performing various reasoning tasks. While CoT allows models to produce more comprehensive reasoning processes, its emphasis on intermediate reasoning steps can inadvertently introduce hallucinations and accumulated errors, limiting models' ability to solve complex reasoning tasks. Inspired by how humans engage in careful, meticulous deductive logical reasoning to solve tasks, we seek to enable language models to perform explicit and rigorous deductive reasoning, and to ensure the trustworthiness of their reasoning process through self-verification. However, directly verifying the validity of an entire deductive reasoning process is challenging, even with advanced models like ChatGPT. In light of this, we propose to decompose the reasoning verification process into a series of step-by-step subprocesses, each receiving only its necessary context and premises. To facilitate this procedure, we propose Natural Program, a natural language-based deductive reasoning format. Our approach enables models to generate precise reasoning steps, where subsequent steps are more rigorously grounded on prior steps. It also empowers language models to carry out reasoning self-verification in a step-by-step manner. By integrating this verification process into each deductive reasoning stage, we significantly enhance the rigor and trustworthiness of the generated reasoning steps, and in doing so also improve answer correctness on complex reasoning tasks. Code will be released at https://github.com/lz1oceani/verify_cot.