Why Language Models Hallucinate
September 4, 2025
Authors: Adam Tauman Kalai, Ofir Nachum, Santosh S. Vempala, Edwin Zhang
cs.AI
Abstract
Like students facing hard exam questions, large language models sometimes
guess when uncertain, producing plausible yet incorrect statements instead of
admitting uncertainty. Such "hallucinations" persist even in state-of-the-art
systems and undermine trust. We argue that language models hallucinate because
the training and evaluation procedures reward guessing over acknowledging
uncertainty, and we analyze the statistical causes of hallucinations in the
modern training pipeline. Hallucinations need not be mysterious -- they
originate simply as errors in binary classification. If incorrect statements
cannot be distinguished from facts, then hallucinations in pretrained language
models will arise through natural statistical pressures. We then argue that
hallucinations persist due to the way most evaluations are graded -- language
models are optimized to be good test-takers, and guessing when uncertain
improves test performance. This "epidemic" of penalizing uncertain responses
can only be addressed through a socio-technical mitigation: modifying the
scoring of existing benchmarks that are misaligned but dominate leaderboards,
rather than introducing additional hallucination evaluations. This change may
steer the field toward more trustworthy AI systems.
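
To make the grading argument concrete, the following is a minimal illustrative sketch, not code from the paper: the confidence values and the specific wrong-answer penalty are assumptions chosen for illustration. It compares the expected score of guessing versus abstaining under binary 0/1 grading and under a scheme that deducts points for incorrect answers.

```python
# Illustrative sketch (assumptions, not the paper's method): why 0/1 grading
# rewards guessing over abstaining, while penalizing wrong answers does not.

def expected_score_binary(p_correct: float) -> float:
    """Expected score under 0/1 grading: 1 if right, 0 if wrong or abstaining."""
    return p_correct  # any guess has positive expected value; abstaining scores 0


def expected_score_penalized(p_correct: float, wrong_penalty: float) -> float:
    """Expected score when a wrong answer costs `wrong_penalty` points and abstaining scores 0."""
    return p_correct - (1.0 - p_correct) * wrong_penalty


if __name__ == "__main__":
    for p in (0.1, 0.3, 0.5, 0.9):  # model's probability of being correct if it guesses
        binary = expected_score_binary(p)
        penalized = expected_score_penalized(p, wrong_penalty=1.0)
        # Under 0/1 grading, guessing beats abstaining even at p = 0.1, so a model
        # optimized for the benchmark should never answer "I don't know".
        # With a penalty, abstaining (score 0) is better whenever the penalized
        # expectation is negative, i.e. when p falls below a confidence threshold.
        decision_binary = "guess" if binary > 0 else "abstain"
        decision_penalized = "guess" if penalized > 0 else "abstain"
        print(f"p={p:.1f}  binary: {binary:+.2f} -> {decision_binary:7s}  "
              f"penalized: {penalized:+.2f} -> {decision_penalized}")
```

Under this illustrative rule, guessing pays off only when the model's confidence exceeds wrong_penalty / (1 + wrong_penalty); the exact threshold is an artifact of the assumed penalty, but it shows how changing benchmark scoring, rather than adding new hallucination evaluations, can remove the incentive to guess when uncertain.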