Why Language Models Hallucinate

September 4, 2025
Authors: Adam Tauman Kalai, Ofir Nachum, Santosh S. Vempala, Edwin Zhang
cs.AI

Abstract

Like students facing hard exam questions, large language models sometimes guess when uncertain, producing plausible yet incorrect statements instead of admitting uncertainty. Such "hallucinations" persist even in state-of-the-art systems and undermine trust. We argue that language models hallucinate because the training and evaluation procedures reward guessing over acknowledging uncertainty, and we analyze the statistical causes of hallucinations in the modern training pipeline. Hallucinations need not be mysterious -- they originate simply as errors in binary classification. If incorrect statements cannot be distinguished from facts, then hallucinations in pretrained language models will arise through natural statistical pressures. We then argue that hallucinations persist due to the way most evaluations are graded -- language models are optimized to be good test-takers, and guessing when uncertain improves test performance. This "epidemic" of penalizing uncertain responses can only be addressed through a socio-technical mitigation: modifying the scoring of existing benchmarks that are misaligned but dominate leaderboards, rather than introducing additional hallucination evaluations. This change may steer the field toward more trustworthy AI systems.
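The incentive argument above can be made concrete with a small worked example. The sketch below is illustrative and not taken from the paper: it assumes a hypothetical binary grading scheme (1 point for a correct answer, 0 for a wrong answer or an abstention) and a hypothetical penalized scheme (1 for correct, -1 for wrong, 0 for abstaining), and compares the expected score of guessing versus abstaining for a model whose best guess is correct with probability p.

```python
# Minimal sketch (assumed scoring schemes, not the paper's formalism):
# under binary grading, guessing weakly dominates abstaining for any p > 0,
# so an optimized test-taker always guesses; with a wrong-answer penalty,
# abstaining becomes the better strategy whenever p < 0.5.

def expected_scores(p: float) -> dict:
    """Expected score of guessing vs. abstaining for a guess correct with prob. p."""
    return {
        "binary":    {"guess": p * 1 + (1 - p) * 0,    "abstain": 0.0},
        "penalized": {"guess": p * 1 + (1 - p) * (-1), "abstain": 0.0},
    }

if __name__ == "__main__":
    for p in (0.2, 0.5, 0.8):
        s = expected_scores(p)
        print(f"p(correct)={p:.1f}: "
              f"binary guess={s['binary']['guess']:.2f} vs abstain=0.00 | "
              f"penalized guess={s['penalized']['guess']:.2f} vs abstain=0.00")
```

Running it shows that binary grading rewards a guess even at p = 0.2, while the penalized scheme only rewards guessing when the model is more likely right than wrong, which is the kind of scoring change to existing benchmarks that the abstract advocates.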