

Measuring Faithfulness in Chain-of-Thought Reasoning

July 17, 2023
Authors: Tamera Lanham, Anna Chen, Ansh Radhakrishnan, Benoit Steiner, Carson Denison, Danny Hernandez, Dustin Li, Esin Durmus, Evan Hubinger, Jackson Kernion, Kamilė Lukošiūtė, Karina Nguyen, Newton Cheng, Nicholas Joseph, Nicholas Schiefer, Oliver Rausch, Robin Larson, Sam McCandlish, Sandipan Kundu, Saurav Kadavath, Shannon Yang, Thomas Henighan, Timothy Maxwell, Timothy Telleen-Lawton, Tristan Hume, Zac Hatfield-Dodds, Jared Kaplan, Jan Brauner, Samuel R. Bowman, Ethan Perez
cs.AI

Abstract

Large language models (LLMs) perform better when they produce step-by-step, "Chain-of-Thought" (CoT) reasoning before answering a question, but it is unclear if the stated reasoning is a faithful explanation of the model's actual reasoning (i.e., its process for answering the question). We investigate hypotheses for how CoT reasoning may be unfaithful, by examining how the model predictions change when we intervene on the CoT (e.g., by adding mistakes or paraphrasing it). Models show large variation across tasks in how strongly they condition on the CoT when predicting their answer, sometimes relying heavily on the CoT and other times primarily ignoring it. CoT's performance boost does not seem to come from CoT's added test-time compute alone or from information encoded via the particular phrasing of the CoT. As models become larger and more capable, they produce less faithful reasoning on most tasks we study. Overall, our results suggest that CoT can be faithful if the circumstances such as the model size and task are carefully chosen.
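As a rough illustration of the interventions the abstract names (adding mistakes to the CoT and paraphrasing it), the sketch below shows one way such a probe could be set up. It is an assumption-laden outline, not code from the paper: `query_model`, `answer_given_cot`, `faithfulness_probe`, and the prompt format are hypothetical stand-ins for whatever model API and prompting scheme one actually uses.

```python
# Minimal sketch of CoT interventions (adding mistakes, paraphrasing).
# `query_model` is a hypothetical helper, not an API from the paper:
# it sends a prompt to an LLM and returns its text completion.

def query_model(prompt: str) -> str:
    raise NotImplementedError("Wire this up to an LLM of your choice.")

def answer_given_cot(question: str, cot: str) -> str:
    # Condition the model's final answer on a (possibly modified) chain of thought.
    prompt = f"{question}\nReasoning: {cot}\nTherefore, the answer is"
    return query_model(prompt).strip()

def add_mistake(cot: str, index: int, wrong_step: str) -> str:
    # Replace one reasoning step with an incorrect one.
    steps = cot.split("\n")
    steps[index] = wrong_step
    return "\n".join(steps)

def paraphrase(cot: str) -> str:
    # Reword the CoT while preserving its content, e.g. via another model call.
    return query_model(f"Paraphrase the following reasoning:\n{cot}")

def faithfulness_probe(question: str, cot: str, wrong_step: str, index: int) -> dict:
    # If the answer flips after an injected mistake, the model is conditioning
    # on the CoT; if it is unchanged, the stated reasoning may be post-hoc.
    return {
        "original": answer_given_cot(question, cot),
        "with_mistake": answer_given_cot(question, add_mistake(cot, index, wrong_step)),
        "paraphrased": answer_given_cot(question, paraphrase(cot)),
    }
```

Comparing the three answers across many questions gives a per-task signal of how strongly the model conditions on its stated reasoning, which is the kind of variation the abstract reports.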