The Consensus Game: Language Model Generation via Equilibrium Search
October 13, 2023
Authors: Athul Paul Jacob, Yikang Shen, Gabriele Farina, Jacob Andreas
cs.AI
Abstract
When applied to question answering and other text generation tasks, language
models (LMs) may be queried generatively (by sampling answers from their output
distribution) or discriminatively (by using them to score or rank a set of
candidate outputs). These procedures sometimes yield very different
predictions. How do we reconcile mutually incompatible scoring procedures to
obtain coherent LM predictions? We introduce a new, training-free,
game-theoretic procedure for language model decoding. Our approach casts
language model decoding as a regularized imperfect-information sequential
signaling game - which we term the CONSENSUS GAME - in which a GENERATOR seeks
to communicate an abstract correctness parameter using natural language
sentences to a DISCRIMINATOR. We develop computational procedures for finding
approximate equilibria of this game, resulting in a decoding algorithm we call
EQUILIBRIUM-RANKING. Applied to a large number of tasks (including reading
comprehension, commonsense reasoning, mathematical problem-solving, and
dialog), EQUILIBRIUM-RANKING consistently, and sometimes substantially,
improves performance over existing LM decoding procedures - on multiple
benchmarks, we observe that applying EQUILIBRIUM-RANKING to LLaMA-7B
outperforms the much larger LLaMA-65B and PaLM-540B models. These results
highlight the promise of game-theoretic tools for addressing fundamental
challenges of truthfulness and consistency in LMs.
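
The abstract does not spell out the equilibrium computation, but the idea can be made concrete. Below is a minimal NumPy sketch of an equilibrium-ranking-style decoder over K candidate answers, assuming Hedge-style, KL-regularized best responses anchored to the LM's initial generative policy pi_G(y | v) and discriminative policy pi_D(v | y). The function name `equilibrium_ranking`, the utility definitions, the uniform prior over the correctness parameter v, and the final scoring rule are illustrative assumptions, not necessarily the authors' exact procedure.

```python
import numpy as np

def softmax(x, axis=-1):
    x = x - x.max(axis=axis, keepdims=True)
    e = np.exp(x)
    return e / e.sum(axis=axis, keepdims=True)

def equilibrium_ranking(logp_g, logp_d, iters=1000, lam_g=0.1, lam_d=0.1):
    """Score K candidates by approximating an equilibrium of the consensus
    game between a GENERATOR and a DISCRIMINATOR.

    logp_g: (2, K) initial generator log-probs log pi_G(y | v),
            with v in {0: correct, 1: incorrect}.
    logp_d: (K, 2) initial discriminator log-probs log pi_D(v | y).
    """
    pi_g = softmax(logp_g, axis=1)
    pi_d = softmax(logp_d, axis=1)
    avg_g, avg_d = pi_g.copy(), pi_d.copy()
    for t in range(1, iters + 1):
        # Both players are rewarded when the discriminator's guess matches v.
        # Generator utility for emitting y given v, against the averaged
        # discriminator: u_G(v, y) = pi_D_avg(v | y).
        u_g = avg_d.T                                   # (2, K)
        # Discriminator utility for guessing v given y: posterior over v
        # under the averaged generator and a uniform prior on v.
        u_d = avg_g.T                                   # (K, 2)
        u_d = u_d / np.clip(u_d.sum(axis=1, keepdims=True), 1e-12, None)
        # A KL-regularized best response toward the initial policy has the
        # closed form pi(a) proportional to pi_init(a) * exp(u(a) / lambda).
        pi_g = softmax(logp_g + u_g / lam_g, axis=1)
        pi_d = softmax(logp_d + u_d / lam_d, axis=1)
        # Time-averaged policies approximate the regularized equilibrium.
        avg_g += (pi_g - avg_g) / (t + 1)
        avg_d += (pi_d - avg_d) / (t + 1)
    # Score each candidate by joint agreement that it is correct (v = 0).
    return avg_g[0] * avg_d[:, 0]

# Toy usage: three candidates whose generative and discriminative scores
# disagree; equilibrium ranking reconciles the two views into one score.
rng = np.random.default_rng(0)
print(equilibrium_ranking(rng.normal(size=(2, 3)), rng.normal(size=(3, 2))))
```

The KL anchoring toward the LM's original policies is what keeps the procedure training-free: the dynamics only reweight the model's own generative and discriminative judgments until they agree, rather than learning new parameters.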