Training Language Models on the Knowledge Graph: Insights on Hallucinations and Their Detectability
August 14, 2024
作者: Jiri Hron, Laura Culp, Gamaleldin Elsayed, Rosanne Liu, Ben Adlam, Maxwell Bileschi, Bernd Bohnet, JD Co-Reyes, Noah Fiedel, C. Daniel Freeman, Izzeddin Gur, Kathleen Kenealy, Jaehoon Lee, Peter J. Liu, Gaurav Mishra, Igor Mordatch, Azade Nova, Roman Novak, Aaron Parisi, Jeffrey Pennington, Alex Rizkowsky, Isabelle Simpson, Hanie Sedghi, Jascha Sohl-dickstein, Kevin Swersky, Sharad Vikram, Tris Warkentin, Lechao Xiao, Kelvin Xu, Jasper Snoek, Simon Kornblith
cs.AI
Abstract
While many capabilities of language models (LMs) improve with increased
training budget, the influence of scale on hallucinations is not yet fully
understood. Hallucinations come in many forms, and there is no universally
accepted definition. We thus focus on studying only those hallucinations where
a correct answer appears verbatim in the training set. To fully control the
training data content, we construct a knowledge graph (KG)-based dataset, and
use it to train a set of increasingly large LMs. We find that for a fixed
dataset, larger and longer-trained LMs hallucinate less. However, hallucinating
on ≤5% of the training data requires an order of magnitude larger model,
and thus an order of magnitude more compute, than Hoffmann et al. (2022)
reported was optimal. Given this costliness, we study how hallucination
detectors depend on scale. While we see that detector size improves performance on
a fixed LM's outputs, we find an inverse relationship between the scale of the LM
and the detectability of its hallucinations.