
Question Decomposition Improves the Faithfulness of Model-Generated Reasoning

July 17, 2023
Authors: Ansh Radhakrishnan, Karina Nguyen, Anna Chen, Carol Chen, Carson Denison, Danny Hernandez, Esin Durmus, Evan Hubinger, Jackson Kernion, Kamilė Lukošiūtė, Newton Cheng, Nicholas Joseph, Nicholas Schiefer, Oliver Rausch, Sam McCandlish, Sheer El Showk, Tamera Lanham, Tim Maxwell, Venkatesa Chandrasekaran, Zac Hatfield-Dodds, Jared Kaplan, Jan Brauner, Samuel R. Bowman, Ethan Perez
cs.AI

Abstract

As large language models (LLMs) perform more difficult tasks, it becomes harder to verify the correctness and safety of their behavior. One approach to help with this issue is to prompt LLMs to externalize their reasoning, e.g., by having them generate step-by-step reasoning as they answer a question (Chain-of-Thought; CoT). The reasoning may enable us to check the process that models use to perform tasks. However, this approach relies on the stated reasoning faithfully reflecting the model's actual reasoning, which is not always the case. To improve over the faithfulness of CoT reasoning, we have models generate reasoning by decomposing questions into subquestions. Decomposition-based methods achieve strong performance on question-answering tasks, sometimes approaching that of CoT while improving the faithfulness of the model's stated reasoning on several recently-proposed metrics. By forcing the model to answer simpler subquestions in separate contexts, we greatly increase the faithfulness of model-generated reasoning over CoT, while still achieving some of the performance gains of CoT. Our results show it is possible to improve the faithfulness of model-generated reasoning; continued improvements may lead to reasoning that enables us to verify the correctness and safety of LLM behavior.
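The "separate contexts" setup can be made concrete with a short sketch. The Python below is a minimal, hypothetical illustration of decomposition-based reasoning as the abstract describes it, not the authors' implementation: the `ask` callable, the prompt wording, and the helper names (`decompose`, `answer_by_decomposition`) are assumptions introduced for illustration.

```python
# A hypothetical sketch of the factored-decomposition idea described above.
# The `ask` callable stands in for a single LLM completion call; the prompt
# wording and helper names are illustrative assumptions, not the paper's code.

from typing import Callable, List


def decompose(question: str, ask: Callable[[str], str]) -> List[str]:
    """Ask the model to split a question into simpler subquestions, one per line."""
    reply = ask(
        "Break the following question into simpler subquestions, one per line:\n"
        f"{question}"
    )
    return [line.strip() for line in reply.splitlines() if line.strip()]


def answer_by_decomposition(question: str, ask: Callable[[str], str]) -> str:
    """Answer a question by answering each subquestion in a separate context.

    Each subquestion is answered without seeing the other subquestions or their
    answers, so the final recomposition step can rely only on the stated
    sub-answers -- the "separate contexts" property the abstract ties to
    improved faithfulness.
    """
    subquestions = decompose(question, ask)

    # One independent model call per subquestion: only the original question
    # and that single subquestion are in context.
    sub_answers = [
        ask(f"Original question: {question}\nSubquestion: {sub}\nAnswer briefly:")
        for sub in subquestions
    ]

    # Recompose the final answer from the subquestion/answer pairs alone.
    evidence = "\n".join(f"Q: {q}\nA: {a}" for q, a in zip(subquestions, sub_answers))
    return ask(
        f"Original question: {question}\n"
        f"Answered subquestions:\n{evidence}\n"
        "Using only the answers above, state a final answer:"
    )
```

In this framing, ordinary CoT would make a single call that produces the reasoning and answer together; keeping subquestion answering in isolated calls is what the abstract credits with the faithfulness gains, while retaining only some of CoT's performance benefit.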