FactAlign: Long-form Factuality Alignment of Large Language Models

October 2, 2024
Authors: Chao-Wei Huang, Yun-Nung Chen
cs.AI

Abstract

Large language models have demonstrated significant potential as next-generation information access engines. However, their reliability is hindered by hallucination and the generation of non-factual content. This is particularly problematic in long-form responses, where assessing and ensuring factual accuracy is complex. In this paper, we address this gap by proposing FactAlign, a novel alignment framework designed to enhance the factuality of LLMs' long-form responses while maintaining their helpfulness. We introduce fKTO, a fine-grained, sentence-level alignment algorithm that extends the Kahneman-Tversky Optimization (KTO) alignment method. Leveraging recent advances in automatic factuality evaluation, FactAlign utilizes fine-grained factuality assessments to guide the alignment process. Our experiments on open-domain prompts and information-seeking questions demonstrate that FactAlign significantly improves the factual accuracy of LLM responses while also improving their helpfulness. Further analyses show that FactAlign can train LLMs to provide more information without losing factual precision, thus improving the factual F1 score. Our source code, datasets, and trained models are publicly available at https://github.com/MiuLab/FactAlign.
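The abstract describes fKTO only at a high level, so the following is a minimal sketch of what a sentence-level KTO-style objective could look like, not the authors' actual implementation. It follows the published KTO formulation (an implicit reward given by the log-probability ratio between the policy and a frozen reference model, compared against a batch-level KL baseline), applied per sentence rather than per whole response, with each sentence labeled desirable or undesirable by an automatic factuality evaluator. All function names, tensor shapes, and hyperparameter defaults here are illustrative assumptions.

```python
import torch

def sentence_level_kto_loss(
    policy_logps,   # (num_sents,) summed token log-probs of each sentence under the policy
    ref_logps,      # (num_sents,) same sentence spans scored by the frozen reference model
    labels,         # (num_sents,) 1 = sentence judged factual, 0 = judged non-factual
    kl_baseline,    # scalar z_ref: detached batch-level estimate of KL(policy || ref)
    beta=0.1,       # KTO inverse-temperature (illustrative default)
    lam_d=1.0,      # weight on desirable (factual) sentences
    lam_u=1.0,      # weight on undesirable (non-factual) sentences
):
    """KTO-style loss applied per sentence instead of per whole response."""
    log_ratio = policy_logps - ref_logps  # implicit per-sentence reward
    # Factual sentences are pushed above the KL baseline, non-factual ones below it.
    desirable = lam_d * (1 - torch.sigmoid(beta * (log_ratio - kl_baseline)))
    undesirable = lam_u * (1 - torch.sigmoid(beta * (kl_baseline - log_ratio)))
    losses = torch.where(labels.bool(), desirable, undesirable)
    return losses.mean()
```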
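The "factual F1" mentioned above balances precision (the fraction of a response's claims that are supported) against recall (how many supported claims the response provides relative to a target count), which is why adding correct information without adding errors raises the score. The abstract does not spell out the exact definition; the sketch below assumes the F1@K-style formulation common in recent long-form factuality evaluation, where K is a hypothetical target number of supported claims.

```python
def factual_f1(num_supported: int, num_unsupported: int, k: int = 64) -> float:
    """F1@K-style factual F1: precision over all extracted claims,
    recall measured against a target of K supported claims."""
    if num_supported == 0:
        return 0.0
    precision = num_supported / (num_supported + num_unsupported)
    recall = min(num_supported / k, 1.0)
    return 2 * precision * recall / (precision + recall)
```

For example, a response with 40 supported and 10 unsupported claims at K = 64 scores precision 0.8 and recall 0.625, giving F1 ≈ 0.70; under this metric a model can improve by supplying more supported claims (raising recall) so long as it does not dilute precision with unsupported ones.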
