Training Language Models on the Knowledge Graph: Insights on Hallucinations and Their Detectability
August 14, 2024
作者: Jiri Hron, Laura Culp, Gamaleldin Elsayed, Rosanne Liu, Ben Adlam, Maxwell Bileschi, Bernd Bohnet, JD Co-Reyes, Noah Fiedel, C. Daniel Freeman, Izzeddin Gur, Kathleen Kenealy, Jaehoon Lee, Peter J. Liu, Gaurav Mishra, Igor Mordatch, Azade Nova, Roman Novak, Aaron Parisi, Jeffrey Pennington, Alex Rizkowsky, Isabelle Simpson, Hanie Sedghi, Jascha Sohl-dickstein, Kevin Swersky, Sharad Vikram, Tris Warkentin, Lechao Xiao, Kelvin Xu, Jasper Snoek, Simon Kornblith
cs.AI
Abstract
While many capabilities of language models (LMs) improve with increased
training budget, the influence of scale on hallucinations is not yet fully
understood. Hallucinations come in many forms, and there is no universally
accepted definition. We thus focus on studying only those hallucinations where
a correct answer appears verbatim in the training set. To fully control the
training data content, we construct a knowledge graph (KG)-based dataset, and
use it to train a set of increasingly large LMs. We find that for a fixed
dataset, larger and longer-trained LMs hallucinate less. However, hallucinating
on ≤5% of the training data requires an order of magnitude larger model,
and thus an order of magnitude more compute, than Hoffmann et al. (2022)
reported was optimal. Given this costliness, we study how hallucination
detectors depend on scale. While we see detector size improves performance on
a fixed LM's outputs, we find an inverse relationship between the scale of the LM
and the detectability of its hallucinations.
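
To make the abstract's setup concrete, below is a minimal illustrative sketch (not the authors' released code) of the idea it describes: knowledge-graph triples are verbalized into flat text for LM training, and a completion is counted as a hallucination only when a correct answer for that query appears verbatim in the training set and the generated answer is not one of them. All entity names, the verbalization template, and the helper functions are assumptions made for this example.

```python
# Hypothetical sketch: KG triples -> training text, plus a verbatim-answer
# hallucination check. Names and format are illustrative, not from the paper.
from collections import defaultdict

# Toy KG: (subject, relation, object) triples.
triples = [
    ("Marie Curie", "field", "physics"),
    ("Marie Curie", "field", "chemistry"),
    ("Alan Turing", "field", "computer science"),
]

def verbalize(subject: str, relation: str, obj: str) -> str:
    """Turn a triple into a flat text example an LM can be trained on."""
    return f"{subject} [{relation}] {obj}"

# Index the KG so every correct answer that appears verbatim in training
# is recoverable for the later hallucination check.
answers = defaultdict(set)
for s, r, o in triples:
    answers[(s, r)].add(o)

training_corpus = [verbalize(s, r, o) for s, r, o in triples]

def is_hallucination(subject: str, relation: str, generated_obj: str) -> bool:
    """Flag a completion only if the query has known training-set answers
    and the generated object is not among them."""
    gold = answers.get((subject, relation))
    return bool(gold) and generated_obj not in gold

# Example: for the prompt "Marie Curie [field]", generating "biology" is
# flagged as a hallucination, while "chemistry" is not.
assert is_hallucination("Marie Curie", "field", "biology")
assert not is_hallucination("Marie Curie", "field", "chemistry")
```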