Nearest Neighbor Speculative Decoding for LLM Generation and Attribution
May 29, 2024
Authors: Minghan Li, Xilun Chen, Ari Holtzman, Beidi Chen, Jimmy Lin, Wen-tau Yih, Xi Victoria Lin
cs.AI
Abstract
Large language models (LLMs) often hallucinate and lack the ability to
provide attribution for their generations. Semi-parametric LMs, such as kNN-LM,
approach these limitations by refining the output of an LM for a given prompt
using its nearest neighbor matches in a non-parametric data store. However,
these models often exhibit slow inference speeds and produce non-fluent texts.
In this paper, we introduce Nearest Neighbor Speculative Decoding (NEST), a
novel semi-parametric language modeling approach that is capable of
incorporating real-world text spans of arbitrary length into the LM generations
and providing attribution to their sources. NEST performs token-level retrieval
at each inference step to compute a semi-parametric mixture distribution and
identify promising span continuations in a corpus. It then uses an approximate
speculative decoding procedure that accepts a prefix of the retrieved span or
generates a new token. NEST significantly enhances the generation quality and
attribution rate of the base LM across a variety of knowledge-intensive tasks,
surpassing the conventional kNN-LM method and performing competitively with
in-context retrieval augmentation. In addition, NEST substantially improves the
generation speed, achieving a 1.8x speedup in inference time when applied to
Llama-2-Chat 70B.
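The abstract describes two mechanisms: a token-level semi-parametric mixture distribution (as in kNN-LM) and an approximate speculative decoding step that accepts a prefix of a retrieved span. The PyTorch sketch below illustrates both under stated assumptions: the fixed interpolation weight `lam`, the distance temperature, and the probability-threshold acceptance rule are illustrative placeholders, not the paper's exact formulation (NEST computes its interpolation adaptively and defines its own acceptance criterion).

```python
import torch
import torch.nn.functional as F

def knn_distribution(query, keys, values, vocab_size, temperature=1.0):
    """kNN next-token distribution, kNN-LM style.

    query:  (d,)   LM hidden state at the current step
    keys:   (k, d) datastore keys of the k retrieved neighbors
    values: (k,)   next-token ids stored alongside those keys
    """
    dists = torch.cdist(query[None, :], keys).squeeze(0)   # (k,) L2 distances
    weights = F.softmax(-dists / temperature, dim=-1)      # closer neighbors weigh more
    p_knn = torch.zeros(vocab_size)
    p_knn.scatter_add_(0, values, weights)                 # aggregate weight per token id
    return p_knn

def semi_parametric_mixture(p_lm, p_knn, lam=0.3):
    """Interpolate the parametric LM with the kNN distribution.
    `lam` is fixed here for illustration; NEST adapts it per step."""
    return lam * p_knn + (1.0 - lam) * p_lm

def accept_span_prefix(span_tokens, mixture_probs_per_step, threshold=0.5):
    """Approximate speculative acceptance (illustrative rule): keep the
    longest prefix of the retrieved span whose tokens the mixture
    distribution assigns probability above `threshold`; fall back to
    ordinary token-by-token generation at the first rejection."""
    accepted = []
    for tok, probs in zip(span_tokens, mixture_probs_per_step):
        if probs[tok] < threshold:
            break
        accepted.append(tok)
    return accepted
```

In this sketch, each accepted prefix advances the generation by several tokens at once while carrying a pointer back to its source document, which is consistent with how the abstract attributes both the attribution capability and the reported 1.8x inference speedup to span-level acceptance.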