Stemming Hallucination in Language Models Using a Licensing Oracle

November 8, 2025
Authors: Simeon Emanuilov, Richard Ackermann
cs.AI

Abstract

Language models exhibit remarkable natural language generation capabilities but remain prone to hallucinations, generating factually incorrect information despite producing syntactically coherent responses. This study introduces the Licensing Oracle, an architectural solution designed to stem hallucinations in LMs by enforcing truth constraints through formal validation against structured knowledge graphs. Unlike statistical approaches that rely on data scaling or fine-tuning, the Licensing Oracle embeds a deterministic validation step into the model's generative process, ensuring that only factually accurate claims are made. We evaluated the effectiveness of the Licensing Oracle through experiments comparing it with several alternative approaches, including baseline language model generation, fine-tuning for factual recall, fine-tuning for abstention behavior, and retrieval-augmented generation (RAG). Our results demonstrate that although RAG and fine-tuning improve performance, they fail to eliminate hallucinations. In contrast, the Licensing Oracle achieved perfect abstention precision (AP = 1.0) and zero false answers (FAR-NE = 0.0), ensuring that only valid claims were generated, with 89.1% accuracy on factual responses. This work shows that architectural innovations such as the Licensing Oracle offer a necessary and sufficient solution to hallucination in domains with structured knowledge representations, with guarantees that statistical methods cannot match. Although the Licensing Oracle is specifically designed to address hallucinations in fact-based domains, its framework lays the groundwork for truth-constrained generation in future AI systems, providing a new path toward reliable, epistemically grounded models.
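
The abstract frames the Licensing Oracle as a deterministic gate: before a factual claim is emitted, it must be formally validated against a structured knowledge graph, and the model abstains otherwise. The sketch below only illustrates that gating idea; it is not the paper's implementation, and all names (Claim, KnowledgeGraph, license_claim, generate_response) as well as the membership-based validation are simplifying assumptions.

```python
# Minimal, illustrative sketch of a licensing-oracle-style gate (hypothetical
# names throughout; not the authors' implementation).

from dataclasses import dataclass


@dataclass(frozen=True)
class Claim:
    """A candidate factual claim expressed as a (subject, predicate, object) triple."""
    subject: str
    predicate: str
    obj: str


class KnowledgeGraph:
    """Toy structured knowledge store: a set of (subject, predicate, object) triples."""

    def __init__(self, triples):
        self._triples = set(triples)

    def contains(self, claim: Claim) -> bool:
        return (claim.subject, claim.predicate, claim.obj) in self._triples


def license_claim(claim: Claim, kg: KnowledgeGraph) -> bool:
    """Deterministic validation step: a claim is 'licensed' only if the
    knowledge graph entails it (here, simple triple membership)."""
    return kg.contains(claim)


def generate_response(candidate: Claim, kg: KnowledgeGraph) -> str:
    """Gate the model's candidate claim: emit it only when licensed,
    otherwise abstain rather than risk a hallucinated answer."""
    if license_claim(candidate, kg):
        return f"{candidate.subject} {candidate.predicate} {candidate.obj}."
    return "I don't know."  # abstention instead of an unlicensed (potentially false) claim


# Usage example
kg = KnowledgeGraph({("Paris", "is the capital of", "France")})
print(generate_response(Claim("Paris", "is the capital of", "France"), kg))  # licensed claim
print(generate_response(Claim("Lyon", "is the capital of", "France"), kg))   # abstains
```

In the paper's setting, validation would presumably involve entity linking and richer graph queries rather than exact triple membership; the point of the sketch is that the accept/abstain decision is deterministic and sits outside the model's sampling process, which is what yields the abstention and false-answer guarantees the abstract reports.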