Sound and Complete Neuro-symbolic Reasoning with LLM-Grounded Interpretations
July 13, 2025
Authors: Bradley P. Allen, Prateek Chhikara, Thomas Macaulay Ferguson, Filip Ilievski, Paul Groth
cs.AI
Abstract
Large language models (LLMs) have demonstrated impressive capabilities in
natural language understanding and generation, but they exhibit problems with
logical consistency in the output they generate. How can we harness LLMs'
broad-coverage parametric knowledge in formal reasoning despite their
inconsistency? We present a method for directly integrating an LLM into the
interpretation function of the formal semantics for a paraconsistent logic. We
provide experimental evidence for the feasibility of the method by evaluating
the function using datasets created from several short-form factuality
benchmarks. Unlike prior work, our method offers a theoretical framework for
neuro-symbolic reasoning that leverages an LLM's knowledge while preserving the
underlying logic's soundness and completeness properties.
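
To make the abstract's central idea concrete, here is a minimal Python sketch of what an LLM-grounded interpretation function for atomic formulas could look like. It assumes a Belnap-Dunn four-valued semantics, a common basis for paraconsistent logics, and a hypothetical yes/no oracle `llm_verdict`; the prompt wording and the value mapping are illustrative assumptions, not the paper's actual implementation.

```python
from enum import Enum
from typing import Callable

class TruthValue(Enum):
    """The four Belnap-Dunn truth values used in many paraconsistent logics."""
    TRUE = "t"      # supported but not refuted
    FALSE = "f"     # refuted but not supported
    BOTH = "b"      # both supported and refuted (a "glut")
    NEITHER = "n"   # neither supported nor refuted (a "gap")

def interpret(atom: str, llm_verdict: Callable[[str], bool]) -> TruthValue:
    """LLM-grounded interpretation of an atomic formula.

    `llm_verdict` stands in for a yes/no query against an LLM's parametric
    knowledge. The atom and its negation are checked independently, so a
    model that inconsistently affirms both is mapped to BOTH rather than
    trivializing the logic, which is what a paraconsistent semantics is
    designed to tolerate.
    """
    supported = llm_verdict(f"Is it true that {atom}?")
    refuted = llm_verdict(f"Is it false that {atom}?")
    if supported and refuted:
        return TruthValue.BOTH
    if supported:
        return TruthValue.TRUE
    if refuted:
        return TruthValue.FALSE
    return TruthValue.NEITHER

if __name__ == "__main__":
    # Toy oracle for demonstration; a real one would call an LLM API
    # and parse its answer into a boolean.
    answers = {
        "Is it true that Paris is the capital of France?": True,
        "Is it false that Paris is the capital of France?": False,
    }
    verdict = lambda question: answers.get(question, False)
    print(interpret("Paris is the capital of France", verdict))  # TruthValue.TRUE
```

Because the four-valued interpretation isolates contradictory LLM answers at the atomic level, the connectives and consequence relation of the underlying logic can keep their standard (sound and complete) semantics, which is the property the abstract emphasizes.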