SQuARE: Sequential Question Answering Reasoning Engine for Enhanced Chain-of-Thought in Large Language Models
February 13, 2025
Authors: Daniel Fleischer, Moshe Berchansky, Gad Markovits, Moshe Wasserblat
cs.AI
Abstract
In the rapidly evolving field of Natural Language Processing, Large Language
Models (LLMs) are tasked with increasingly complex reasoning challenges.
Traditional methods like chain-of-thought prompting have shown promise but
often fall short in fully leveraging a model's reasoning capabilities. This
paper introduces SQuARE (Sequential Question Answering Reasoning Engine), a
novel prompting technique designed to improve reasoning through a
self-interrogation paradigm. Building upon CoT frameworks, SQuARE prompts
models to generate and resolve multiple auxiliary questions before tackling the
main query, promoting a more thorough exploration of various aspects of a
topic. Our expansive evaluations, conducted with Llama 3 and GPT-4o models
across multiple question-answering datasets, demonstrate that SQuARE
significantly surpasses traditional CoT prompts and existing
rephrase-and-respond methods. By systematically decomposing queries, SQuARE
advances LLM capabilities in reasoning tasks. The code is publicly available at
https://github.com/IntelLabs/RAG-FiT/tree/square.
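The abstract describes SQuARE as prompting the model to generate and resolve several auxiliary questions before answering the main query. The snippet below is a minimal illustrative sketch of what such a self-interrogation prompt could look like; the template wording, the `square_answer` helper, and the `llm` callable are assumptions made for illustration and are not the authors' actual prompts, which are available in the linked repository.

```python
# Minimal sketch of a SQuARE-style self-interrogation prompt (illustrative only;
# the exact template used in the paper is in the official RAG-FiT repository).
from typing import Callable

# Hypothetical prompt template: ask the model to pose and answer N auxiliary
# questions before committing to a final answer.
SQUARE_TEMPLATE = """You will answer a question by first asking and answering
{n} auxiliary questions that explore different aspects of it.

Question: {question}

Step 1: List {n} auxiliary questions whose answers would help.
Step 2: Answer each auxiliary question briefly.
Step 3: Using those answers, give the final answer on a line starting with
"Final answer:".
"""


def square_answer(question: str, llm: Callable[[str], str], n: int = 3) -> str:
    """Build a SQuARE-style prompt and return the model's completion.

    `llm` is any callable mapping a prompt string to a completion string,
    e.g. a thin wrapper around a chat-completion API (assumed interface).
    """
    prompt = SQUARE_TEMPLATE.format(question=question, n=n)
    return llm(prompt)
```

In this sketch the number of auxiliary questions `n` is the main knob, mirroring the paper's idea that decomposing the query into intermediate questions yields a more thorough exploration than a single chain-of-thought pass.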