Premise Order Matters in Reasoning with Large Language Models
February 14, 2024
Authors: Xinyun Chen, Ryan A. Chi, Xuezhi Wang, Denny Zhou
cs.AI
Abstract
Large language models (LLMs) have accomplished remarkable reasoning
performance in various domains. However, in the domain of reasoning tasks, we
discover a frailty: LLMs are surprisingly brittle to the ordering of the
premises, despite the fact that such ordering does not alter the underlying
task. In particular, we observe that LLMs achieve the best performance when the
premise order aligns with the context required in intermediate reasoning steps.
For example, in deductive reasoning tasks, presenting the premises in the same
order as the ground truth proof in the prompt (as opposed to random ordering)
drastically increases the model's accuracy. We first examine the effect of
premise ordering on deductive reasoning on a variety of LLMs, and our
evaluation shows that permuting the premise order can cause a performance drop
of over 30%. In addition, we release the benchmark R-GSM, based on GSM8K, to
examine the ordering effect for mathematical problem-solving, and we again
observe a significant drop in accuracy, relative to the original GSM8K
benchmark.
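To make the permutation setup the abstract describes concrete, here is a minimal Python sketch: the same deductive task is rendered once with premises in ground-truth proof order and once with the order permuted at random. This is an illustration only, not the authors' released code; the example premises, question, and prompt format are hypothetical.

```python
import random

# A toy deductive-reasoning instance. The premises are listed in the
# order in which a ground-truth proof would use them; the rules and
# names here are hypothetical, for illustration only.
PREMISES_PROOF_ORDER = [
    "Alice is a teacher.",
    "If Alice is a teacher, then Alice works at a school.",
    "If Alice works at a school, then Alice knows many students.",
]
QUESTION = "Does Alice know many students?"


def build_prompt(premises, question):
    """Concatenate numbered premises (in the given order) and the question."""
    numbered = "\n".join(f"{i + 1}. {p}" for i, p in enumerate(premises))
    return f"Premises:\n{numbered}\n\nQuestion: {question}"


if __name__ == "__main__":
    # Prompt with premises in the ground-truth proof order.
    forward_prompt = build_prompt(PREMISES_PROOF_ORDER, QUESTION)

    # The same task with the premise order permuted at random; the
    # underlying logical problem is unchanged, only the presentation.
    shuffled = list(PREMISES_PROOF_ORDER)
    random.shuffle(shuffled)
    permuted_prompt = build_prompt(shuffled, QUESTION)

    # Each prompt would then be sent to the LLM under evaluation, and
    # accuracy compared across many random permutations.
    print(forward_prompt, permuted_prompt, sep="\n\n")
```

Comparing model accuracy on the proof-order prompts against many such random permutations is, in essence, the measurement behind the reported performance drop of over 30%.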