

Long-form factuality in large language models

March 27, 2024
作者: Jerry Wei, Chengrun Yang, Xinying Song, Yifeng Lu, Nathan Hu, Dustin Tran, Daiyi Peng, Ruibo Liu, Da Huang, Cosmo Du, Quoc V. Le
cs.AI

Abstract

Large language models (LLMs) often generate content that contains factual errors when responding to fact-seeking prompts on open-ended topics. To benchmark a model's long-form factuality in open domains, we first use GPT-4 to generate LongFact, a prompt set comprising thousands of questions spanning 38 topics. We then propose that LLM agents can be used as automated evaluators for long-form factuality through a method which we call Search-Augmented Factuality Evaluator (SAFE). SAFE utilizes an LLM to break down a long-form response into a set of individual facts and to evaluate the accuracy of each fact using a multi-step reasoning process comprising sending search queries to Google Search and determining whether a fact is supported by the search results. Furthermore, we propose extending F1 score as an aggregated metric for long-form factuality. To do so, we balance the percentage of supported facts in a response (precision) with the percentage of provided facts relative to a hyperparameter representing a user's preferred response length (recall). Empirically, we demonstrate that LLM agents can achieve superhuman rating performance - on a set of ~16k individual facts, SAFE agrees with crowdsourced human annotators 72% of the time, and on a random subset of 100 disagreement cases, SAFE wins 76% of the time. At the same time, SAFE is more than 20 times cheaper than human annotators. We also benchmark thirteen language models on LongFact across four model families (Gemini, GPT, Claude, and PaLM-2), finding that larger language models generally achieve better long-form factuality. LongFact, SAFE, and all experimental code are available at https://github.com/google-deepmind/long-form-factuality.
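The aggregation described above can be made concrete with a small sketch. The following Python snippet is illustrative, not the paper's reference implementation (see the linked repository for that): it assumes precision is the fraction of provided facts that are supported, and recall is the number of supported facts divided by the hyperparameter K (the user's preferred response length), capped at 1. The function name `f1_at_k` and the edge-case handling are our own choices for illustration.

```python
def f1_at_k(num_supported: int, num_not_supported: int, k: int) -> float:
    """Aggregate long-form factuality into a single F1-style score.

    precision = supported facts / all provided facts
    recall    = min(supported facts / K, 1), where K encodes the
                user's preferred response length (assumption based on
                the abstract's description, not the official code).
    """
    total_facts = num_supported + num_not_supported
    if total_facts == 0 or num_supported == 0:
        return 0.0  # a response with no supported facts scores zero
    precision = num_supported / total_facts
    recall = min(num_supported / k, 1.0)
    return 2 * precision * recall / (precision + recall)


# Example: 80 of 100 provided facts are supported, evaluated with K = 64.
# precision = 0.8, recall = min(80/64, 1) = 1.0, so F1 is about 0.89.
print(f1_at_k(num_supported=80, num_not_supported=20, k=64))
```

This formulation rewards responses that are both accurate (high precision) and sufficiently detailed relative to K (high recall), so a short but fully correct answer and a long but error-laden one are both penalized.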
