
FLAME: Factuality-Aware Alignment for Large Language Models

May 2, 2024
作者: Sheng-Chieh Lin, Luyu Gao, Barlas Oguz, Wenhan Xiong, Jimmy Lin, Wen-tau Yih, Xilun Chen
cs.AI

Abstract

Alignment is a standard procedure to fine-tune pre-trained large language models (LLMs) to follow natural language instructions and serve as helpful AI assistants. We have observed, however, that the conventional alignment process fails to enhance the factual accuracy of LLMs, and often leads to the generation of more false facts (i.e., hallucination). In this paper, we study how to make the LLM alignment process more factual by first identifying factors that lead to hallucination in both alignment steps: supervised fine-tuning (SFT) and reinforcement learning (RL). In particular, we find that training the LLM on new knowledge or unfamiliar texts can encourage hallucination. This makes SFT less factual, as it trains on human-labeled data that may be novel to the LLM. Furthermore, the reward functions used in standard RL can also encourage hallucination, because they guide the LLM to provide more helpful responses to a diverse set of instructions, often preferring longer and more detailed responses. Based on these observations, we propose factuality-aware alignment, comprising factuality-aware SFT and factuality-aware RL through direct preference optimization. Experiments show that our proposed factuality-aware alignment guides LLMs to output more factual responses while maintaining instruction-following capability.
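
The abstract names direct preference optimization (DPO) as the vehicle for factuality-aware RL. For reference only, below is a sketch of the standard DPO objective that such a step would build on; how FLAME constructs its factuality-based preference pairs is not specified in this abstract, so the choice of preferred response $y_w$ and dispreferred response $y_l$ should be read as an assumption.

\[
\mathcal{L}_{\mathrm{DPO}}(\pi_\theta;\pi_{\mathrm{ref}}) = -\,\mathbb{E}_{(x,\,y_w,\,y_l)\sim\mathcal{D}}\left[\log\sigma\!\left(\beta\log\frac{\pi_\theta(y_w\mid x)}{\pi_{\mathrm{ref}}(y_w\mid x)} \;-\; \beta\log\frac{\pi_\theta(y_l\mid x)}{\pi_{\mathrm{ref}}(y_l\mid x)}\right)\right]
\]

Here $\pi_\theta$ is the policy being aligned, $\pi_{\mathrm{ref}}$ is the reference (typically SFT) model, $\sigma$ is the sigmoid function, and $\beta$ controls how far the policy may deviate from the reference.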