
DoLa: Decoding by Contrasting Layers Improves Factuality in Large Language Models

September 7, 2023
作者: Yung-Sung Chuang, Yujia Xie, Hongyin Luo, Yoon Kim, James Glass, Pengcheng He
cs.AI

Abstract

Despite their impressive capabilities, large language models (LLMs) are prone to hallucinations, i.e., generating content that deviates from facts seen during pretraining. We propose a simple decoding strategy for reducing hallucinations with pretrained LLMs that requires neither conditioning on retrieved external knowledge nor additional fine-tuning. Our approach obtains the next-token distribution by contrasting the logits obtained from projecting the later layers versus earlier layers to the vocabulary space, exploiting the fact that factual knowledge in an LLM has generally been shown to be localized to particular transformer layers. We find that this Decoding by Contrasting Layers (DoLa) approach is able to better surface factual knowledge and reduce the generation of incorrect facts. DoLa consistently improves truthfulness across multiple-choice tasks and open-ended generation tasks, for example improving the performance of LLaMA family models on TruthfulQA by 12-17 absolute percentage points, demonstrating its potential for making LLMs reliably generate truthful facts.
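The layer-contrasting step described above can be illustrated with a minimal numpy sketch. This is an assumption-laden simplification, not the paper's implementation: it takes pre-computed next-token logits from one "mature" (final) layer and one fixed "premature" (early) layer, whereas DoLa selects the premature layer dynamically; the `alpha` threshold stands in for the adaptive plausibility constraint that restricts the contrast to tokens the final layer already deems plausible.

```python
import numpy as np

def log_softmax(x):
    """Numerically stable log-softmax over a 1-D array of logits."""
    x = x - x.max()
    return x - np.log(np.exp(x).sum())

def dola_contrast(final_logits, early_logits, alpha=0.1):
    """Score next tokens by contrasting final-layer vs early-layer log-probs.

    Illustrative sketch only: `alpha` mimics the adaptive plausibility
    constraint -- tokens whose final-layer probability falls below
    alpha * max-probability are excluded (set to -inf)."""
    logp_final = log_softmax(np.asarray(final_logits, dtype=float))
    logp_early = log_softmax(np.asarray(early_logits, dtype=float))
    # Keep only tokens the mature (final) layer considers plausible.
    mask = logp_final >= np.log(alpha) + logp_final.max()
    # Contrast: tokens whose confidence grew across layers are boosted.
    return np.where(mask, logp_final - logp_early, -np.inf)
```

Intuitively, a token whose probability rises sharply between the early and final layers reflects knowledge that "emerged" in the later layers, which the paper associates with factual content; tokens the early layer already predicted confidently gain little from the contrast.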