Deliberate then Generate: Enhanced Prompting Framework for Text Generation
May 31, 2023
Authors: Bei Li, Rui Wang, Junliang Guo, Kaitao Song, Xu Tan, Hany Hassan, Arul Menezes, Tong Xiao, Jiang Bian, Jingbo Zhu
cs.AI
Abstract
Large language models (LLMs) have shown remarkable success across a wide range of natural language generation tasks, where proper prompt designs make great impacts. While existing prompting methods are normally restricted to providing correct information, in this paper, we encourage the model to deliberate by proposing a novel Deliberate then Generate (DTG) prompting framework, which consists of error detection instructions and candidates that may contain errors. DTG is a simple yet effective technique that can be applied to various text generation tasks with minimal modifications. We conduct extensive experiments on 20+ datasets across 7 text generation tasks, including summarization, translation, dialogue, and more. We show that DTG consistently outperforms existing prompting methods and achieves state-of-the-art performance on multiple text generation tasks. We also provide in-depth analyses to reveal the underlying mechanisms of DTG, which may inspire future research on prompting for LLMs.
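
To make the idea concrete, the sketch below assembles a DTG-style prompt for a translation task: it pairs an error-detection instruction with a candidate output that may contain errors, and then asks the model to produce the final result. This is a minimal illustration only; the template wording, the `build_dtg_prompt` helper, and the made-up candidate are assumptions for this sketch, not the paper's exact prompt design.

```python
# Minimal sketch of a DTG-style prompt for translation (illustrative only;
# the exact wording of the paper's prompt template is not reproduced here).

def build_dtg_prompt(source_text: str, candidate: str,
                     src_lang: str = "German", tgt_lang: str = "English") -> str:
    """Combine an error-detection instruction with a possibly erroneous
    candidate, encouraging the model to deliberate before generating."""
    return (
        f"Translate the following {src_lang} sentence into {tgt_lang}.\n"
        f"Source: {source_text}\n"
        f"A candidate translation that may contain errors is given below.\n"
        f"Candidate: {candidate}\n"
        "First identify any errors in the candidate, then output the "
        "corrected, final translation."
    )

if __name__ == "__main__":
    source = "Der Hund jagt die Katze im Garten."
    # The candidate could come from a baseline system or be deliberately
    # weak; this one is a fabricated example for demonstration.
    candidate = "The dog chase the cat in garden."
    print(build_dtg_prompt(source, candidate))
```

The resulting prompt string would then be sent to an LLM through whatever API is in use; the key point of the framework is that the model is asked to detect errors in a possibly flawed candidate before committing to its own output.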