

Long-form factuality in large language models

March 27, 2024
Authors: Jerry Wei, Chengrun Yang, Xinying Song, Yifeng Lu, Nathan Hu, Dustin Tran, Daiyi Peng, Ruibo Liu, Da Huang, Cosmo Du, Quoc V. Le
cs.AI

Abstract

Large language models (LLMs) often generate content that contains factual errors when responding to fact-seeking prompts on open-ended topics. To benchmark a model's long-form factuality in open domains, we first use GPT-4 to generate LongFact, a prompt set comprising thousands of questions spanning 38 topics. We then propose that LLM agents can be used as automated evaluators for long-form factuality through a method which we call Search-Augmented Factuality Evaluator (SAFE). SAFE utilizes an LLM to break down a long-form response into a set of individual facts and to evaluate the accuracy of each fact using a multi-step reasoning process comprising sending search queries to Google Search and determining whether a fact is supported by the search results. Furthermore, we propose extending F1 score as an aggregated metric for long-form factuality. To do so, we balance the percentage of supported facts in a response (precision) with the percentage of provided facts relative to a hyperparameter representing a user's preferred response length (recall). Empirically, we demonstrate that LLM agents can achieve superhuman rating performance - on a set of ~16k individual facts, SAFE agrees with crowdsourced human annotators 72% of the time, and on a random subset of 100 disagreement cases, SAFE wins 76% of the time. At the same time, SAFE is more than 20 times cheaper than human annotators. We also benchmark thirteen language models on LongFact across four model families (Gemini, GPT, Claude, and PaLM-2), finding that larger language models generally achieve better long-form factuality. LongFact, SAFE, and all experimental code are available at https://github.com/google-deepmind/long-form-factuality.
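The extended F1 metric described above balances factual precision against a recall term anchored to a preferred response length. Below is a minimal Python sketch of how such an F1@K aggregation could be computed, assuming each fact in a response has already been labeled supported or not supported by an evaluator such as SAFE; the function and variable names are illustrative and are not taken from the released code.

    # Minimal sketch of the F1@K aggregation described in the abstract.
    # Assumes facts were already rated supported / not supported by an
    # evaluator such as SAFE; details may differ from the paper's definition.

    def f1_at_k(num_supported: int, num_not_supported: int, k: int) -> float:
        """Balance factual precision against recall up to K preferred facts."""
        if num_supported == 0:
            return 0.0
        # Precision: fraction of provided facts that are supported.
        precision = num_supported / (num_supported + num_not_supported)
        # Recall: supported facts relative to K, the hyperparameter for the
        # user's preferred response length, capped at 1 so ever-longer
        # responses are not rewarded indefinitely.
        recall = min(num_supported / k, 1.0)
        return 2 * precision * recall / (precision + recall)

    # Example: 45 supported facts, 5 unsupported facts, K = 64.
    print(f1_at_k(45, 5, 64))  # ~0.79

The cap on the recall term is what lets the metric reward informative responses up to the preferred length K without encouraging unbounded padding with additional facts.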
