

Meta-Prompting: Enhancing Language Models with Task-Agnostic Scaffolding

January 23, 2024
作者: Mirac Suzgun, Adam Tauman Kalai
cs.AI

Abstract

We introduce meta-prompting, an effective scaffolding technique designed to enhance the functionality of language models (LMs). This approach transforms a single LM into a multi-faceted conductor, adept at managing and integrating multiple independent LM queries. By employing high-level instructions, meta-prompting guides the LM to break down complex tasks into smaller, more manageable subtasks. These subtasks are then handled by distinct "expert" instances of the same LM, each operating under specific, tailored instructions. Central to this process is the LM itself, in its role as the conductor, which ensures seamless communication and effective integration of the outputs from these expert models. It additionally employs its inherent critical thinking and robust verification processes to refine and authenticate the end result. This collaborative prompting approach empowers a single LM to simultaneously act as a comprehensive orchestrator and a panel of diverse experts, significantly enhancing its performance across a wide array of tasks. The zero-shot, task-agnostic nature of meta-prompting greatly simplifies user interaction by obviating the need for detailed, task-specific instructions. Furthermore, our research demonstrates the seamless integration of external tools, such as a Python interpreter, into the meta-prompting framework, thereby broadening its applicability and utility. Through rigorous experimentation with GPT-4, we establish the superiority of meta-prompting over conventional scaffolding methods: When averaged across all tasks, including the Game of 24, Checkmate-in-One, and Python Programming Puzzles, meta-prompting, augmented with a Python interpreter functionality, surpasses standard prompting by 17.1%, expert (dynamic) prompting by 17.3%, and multipersona prompting by 15.2%.
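The conductor-and-experts loop described in the abstract can be sketched in a few lines of Python. Everything here is illustrative, not the authors' implementation: `query_model` is a hypothetical stand-in for a real LM API call, the `Expert …: "…"` and `FINAL ANSWER:` markers are assumed conventions for the conductor's output, and `fake_model` is a canned stub so the sketch runs end to end. The key structural point matches the paper's description: each "expert" is a fresh call that sees only the conductor's tailored instruction, while the conductor retains the full transcript.

```python
import re

def run_meta_prompting(task, query_model, max_rounds=10):
    """Conductor loop: a single LM orchestrates and plays fresh 'experts'.

    query_model(messages) -> str is a hypothetical LM call. The conductor
    keeps the whole transcript; each expert instance receives only its
    tailored instruction, with no shared history.
    """
    transcript = [{"role": "user", "content": task}]
    for _ in range(max_rounds):
        reply = query_model(transcript)
        transcript.append({"role": "assistant", "content": reply})
        # Conductor signals completion with an assumed FINAL ANSWER marker.
        final = re.search(r"FINAL ANSWER:\s*(.*)", reply, re.DOTALL)
        if final:
            return final.group(1).strip()
        # Conductor addresses an expert with a quoted instruction,
        # e.g.  Expert Mathematician: "Compute 4 * 6."
        call = re.search(r'Expert ([^:]+):\s*"([^"]+)"', reply)
        if call:
            name, instruction = call.groups()
            # Fresh expert instance: sees only its own instruction.
            expert_out = query_model([{"role": "user", "content": instruction}])
            transcript.append(
                {"role": "user", "content": f"{name} says: {expert_out}"})
    return None

# Deterministic stub standing in for GPT-4, just to exercise the loop.
def fake_model(messages):
    last = messages[-1]["content"]
    if last.startswith("Solve"):
        return 'Expert Mathematician: "Compute 4 * 6."'
    if last.startswith("Compute"):
        return "24"
    return "FINAL ANSWER: 24"

print(run_meta_prompting("Solve the Game of 24 for 4, 6.", fake_model))
# → 24
```

In the paper's framework one expert could also be a Python-interpreter tool rather than another LM call; in this sketch that would amount to dispatching on the expert's name inside the loop instead of always calling `query_model`.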