
Nearest Neighbor Speculative Decoding for LLM Generation and Attribution

May 29, 2024
Authors: Minghan Li, Xilun Chen, Ari Holtzman, Beidi Chen, Jimmy Lin, Wen-tau Yih, Xi Victoria Lin
cs.AI

Abstract

Large language models (LLMs) often hallucinate and lack the ability to provide attribution for their generations. Semi-parametric LMs, such as kNN-LM, approach these limitations by refining the output of an LM for a given prompt using its nearest neighbor matches in a non-parametric data store. However, these models often exhibit slow inference speeds and produce non-fluent texts. In this paper, we introduce Nearest Neighbor Speculative Decoding (NEST), a novel semi-parametric language modeling approach that is capable of incorporating real-world text spans of arbitrary length into the LM generations and providing attribution to their sources. NEST performs token-level retrieval at each inference step to compute a semi-parametric mixture distribution and identify promising span continuations in a corpus. It then uses an approximate speculative decoding procedure that accepts a prefix of the retrieved span or generates a new token. NEST significantly enhances the generation quality and attribution rate of the base LM across a variety of knowledge-intensive tasks, surpassing the conventional kNN-LM method and performing competitively with in-context retrieval augmentation. In addition, NEST substantially improves the generation speed, achieving a 1.8x speedup in inference time when applied to Llama-2-Chat 70B.
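The two core mechanisms the abstract describes can be sketched in simplified form: a semi-parametric mixture that interpolates the base LM's next-token distribution with a nearest-neighbor distribution built from retrieved tokens (as in kNN-LM), and an acceptance step that keeps the longest prefix of a retrieved span that the mixture still assigns high probability. This is a minimal illustrative sketch, not the authors' implementation; the function names (`knn_mixture`, `accept_span_prefix`), the interpolation weight `lam`, and the acceptance `threshold` are hypothetical, and NEST's actual procedure involves additional components (e.g. relative retrieval confidence and dynamic span selection) not shown here.

```python
import numpy as np

def knn_mixture(p_lm, neighbors, vocab_size, temperature=1.0, lam=0.3):
    """Interpolate the parametric LM distribution with a kNN distribution.

    p_lm:      array of shape (vocab_size,), base LM next-token probabilities.
    neighbors: list of (distance, token_id) pairs retrieved from the datastore.
    lam:       mixture weight for the non-parametric component (hypothetical value).
    """
    # Softmax over negative distances: closer neighbors get more weight.
    weights = np.exp(-np.array([d for d, _ in neighbors]) / temperature)
    weights /= weights.sum()
    # Scatter neighbor weights onto their token ids.
    p_knn = np.zeros(vocab_size)
    for w, (_, tok) in zip(weights, neighbors):
        p_knn[tok] += w
    return lam * p_knn + (1.0 - lam) * p_lm

def accept_span_prefix(span_tokens, mixture_probs_fn, threshold=0.5):
    """Accept the longest prefix of a retrieved span whose tokens the mixture
    distribution rates above a threshold; on the first rejection, the decoder
    would fall back to sampling a fresh token (not shown)."""
    accepted = []
    for tok in span_tokens:
        if mixture_probs_fn(accepted)[tok] >= threshold:
            accepted.append(tok)
        else:
            break
    return accepted
```

Because whole span prefixes can be committed in one step instead of one token per forward pass, this acceptance scheme is also what yields the speedup the abstract reports for Llama-2-Chat 70B.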

