Sound and Complete Neuro-symbolic Reasoning with LLM-Grounded Interpretations

July 13, 2025
Authors: Bradley P. Allen, Prateek Chhikara, Thomas Macaulay Ferguson, Filip Ilievski, Paul Groth
cs.AI

Abstract

Large language models (LLMs) have demonstrated impressive capabilities in natural language understanding and generation, but they exhibit problems with logical consistency in the output they generate. How can we harness LLMs' broad-coverage parametric knowledge in formal reasoning despite their inconsistency? We present a method for directly integrating an LLM into the interpretation function of the formal semantics for a paraconsistent logic. We provide experimental evidence for the feasibility of the method by evaluating the function using datasets created from several short-form factuality benchmarks. Unlike prior work, our method offers a theoretical framework for neuro-symbolic reasoning that leverages an LLM's knowledge while preserving the underlying logic's soundness and completeness properties.
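To make the core idea concrete, below is a minimal sketch, not taken from the paper, of what an LLM-grounded interpretation function could look like. It assumes the paraconsistent logic is a Belnap-Dunn-style four-valued logic (FDE) and that each atomic statement is grounded by two independent yes/no queries to an LLM; the `llm_yes_no` helper, the prompt wording, and all other names are illustrative placeholders, not the authors' implementation.

```python
# A minimal sketch of an LLM-grounded interpretation for a four-valued
# (Belnap-Dunn / FDE-style) paraconsistent logic. Assumptions not taken
# from the paper: the specific logic (FDE), the paired yes/no query
# format, and the hypothetical `llm_yes_no` wrapper around an LLM call.

from dataclasses import dataclass


@dataclass(frozen=True)
class V:
    """Four-valued truth value represented as a pair of evidence bits."""
    supported: bool  # evidence that the statement is true
    refuted: bool    # evidence that the statement is false


TRUE, FALSE = V(True, False), V(False, True)
BOTH, NEITHER = V(True, True), V(False, False)


def llm_yes_no(question: str) -> bool:
    """Hypothetical wrapper: ask an LLM a yes/no question, parse the answer."""
    raise NotImplementedError("plug in an LLM client here")


def interpret_atom(statement: str) -> V:
    # Two independent queries let the interpretation assign 'both'
    # (contradictory parametric knowledge) or 'neither' (no knowledge).
    return V(
        supported=llm_yes_no(f"Is the following statement true? {statement}"),
        refuted=llm_yes_no(f"Is the following statement false? {statement}"),
    )


# Standard FDE connectives on evidence pairs.
def neg(a: V) -> V:
    return V(a.refuted, a.supported)


def conj(a: V, b: V) -> V:
    return V(a.supported and b.supported, a.refuted or b.refuted)


def disj(a: V, b: V) -> V:
    return V(a.supported or b.supported, a.refuted and b.refuted)


# Example with fixed values: a contradictory atom conjoined with a true atom.
print(conj(BOTH, TRUE))  # V(supported=True, refuted=True), i.e. 'both'
```

Under this kind of semantics, contradictory or absent parametric knowledge maps to the 'both' and 'neither' values, which a paraconsistent logic tolerates rather than letting a single contradiction trivialize all inference.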