
Contrastive Chain-of-Thought Prompting

November 15, 2023
作者: Yew Ken Chia, Guizhen Chen, Luu Anh Tuan, Soujanya Poria, Lidong Bing
cs.AI

Abstract

Despite the success of chain of thought in enhancing language model reasoning, the underlying process remains less well understood. Although logically sound reasoning appears inherently crucial for chain of thought, prior studies surprisingly reveal minimal impact when using invalid demonstrations instead. Furthermore, the conventional chain of thought does not inform language models on what mistakes to avoid, which potentially leads to more errors. Hence, inspired by how humans can learn from both positive and negative examples, we propose contrastive chain of thought to enhance language model reasoning. Compared to the conventional chain of thought, our approach provides both valid and invalid reasoning demonstrations, to guide the model to reason step-by-step while reducing reasoning mistakes. To improve generalization, we introduce an automatic method to construct contrastive demonstrations. Our experiments on reasoning benchmarks demonstrate that contrastive chain of thought can serve as a general enhancement of chain-of-thought prompting.
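The core idea above can be sketched in code: pair a valid worked demonstration with an invalid one in the prompt, so the model sees both the reasoning to imitate and the mistake to avoid. The demonstration text, prompt wording, and the number-reversal heuristic for automatically deriving an invalid rationale are illustrative assumptions, not the authors' exact templates or construction method.

```python
import re

# Example demonstration (assumed for illustration, not from the paper).
QUESTION = "James has 3 apples and buys 2 more. How many apples does he have?"
VALID_RATIONALE = "James starts with 3 apples. Buying 2 more gives 3 + 2 = 5. The answer is 5."

def make_invalid(rationale: str) -> str:
    """Automatically derive an invalid rationale from a valid one by
    reversing the order of its numbers (a simple stand-in heuristic; the
    paper's own automatic construction may differ)."""
    numbers = re.findall(r"\d+", rationale)
    it = iter(list(reversed(numbers)))
    return re.sub(r"\d+", lambda m: next(it), rationale)

def contrastive_cot_prompt(new_question: str) -> str:
    """Show a correct and an incorrect demonstration side by side, then
    ask the model to explain the new question step by step."""
    demo = (
        f"Question: {QUESTION}\n"
        f"Correct explanation: {VALID_RATIONALE}\n"
        f"Wrong explanation: {make_invalid(VALID_RATIONALE)}"
    )
    return f"{demo}\n\nQuestion: {new_question}\nCorrect explanation:"

print(contrastive_cot_prompt(
    "A shelf holds 4 books and 6 more are added. How many books are on the shelf?"
))
```

Under this sketch, the prompt sent to the model ends with the new question followed by "Correct explanation:", cueing step-by-step reasoning while the adjacent wrong demonstration signals which errors to avoid.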