

Chain-of-Knowledge: Integrating Knowledge Reasoning into Large Language Models by Learning from Knowledge Graphs

June 30, 2024
作者: Yifei Zhang, Xintao Wang, Jiaqing Liang, Sirui Xia, Lida Chen, Yanghua Xiao
cs.AI

Abstract

Large Language Models (LLMs) have exhibited impressive proficiency in various natural language processing (NLP) tasks, which involve increasingly complex reasoning. Knowledge reasoning, a primary type of reasoning, aims at deriving new knowledge from existing knowledge. While it has been widely studied in the context of knowledge graphs (KGs), knowledge reasoning in LLMs remains underexplored. In this paper, we introduce Chain-of-Knowledge (CoK), a comprehensive framework for knowledge reasoning that includes methodologies for both dataset construction and model learning. For dataset construction, we create KnowReason via rule mining on KGs. For model learning, we observe rule overfitting induced by naive training. Hence, we enhance CoK with a trial-and-error mechanism that simulates the human process of internal knowledge exploration. We conduct extensive experiments with KnowReason. Our results show the effectiveness of CoK in improving LLMs in not only knowledge reasoning but also general reasoning benchmarks.
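
The abstract states that KnowReason is built via rule mining on knowledge graphs. As a rough illustration only (not the paper's actual pipeline), the Python sketch below mines compositional rules of the form r1(x, y) ∧ r2(y, z) ⇒ r3(x, z) from a toy triple set; all entity and relation names, and the confidence threshold, are hypothetical choices made for this example.

    # Illustrative sketch (not the authors' implementation) of mining a
    # compositional rule r1(x, y) ∧ r2(y, z) => r3(x, z) from a toy KG.
    # Rules like this could seed multi-hop reasoning chains for a dataset
    # such as KnowReason. All triples below are made up for the example.
    from collections import defaultdict
    from itertools import product

    triples = [
        ("Alice", "born_in", "Paris"),
        ("Paris", "capital_of", "France"),
        ("Alice", "citizen_of", "France"),
        ("Bob", "born_in", "Rome"),
        ("Rome", "capital_of", "Italy"),
        ("Bob", "citizen_of", "Italy"),
    ]

    # Index (head, tail) pairs by relation for quick joins.
    by_rel = defaultdict(list)
    for h, r, t in triples:
        by_rel[r].append((h, t))

    def mine_composition_rules(min_confidence=0.8):
        """Find relation pairs (r1, r2) whose composition is mostly covered by some r3."""
        rules = []
        relations = list(by_rel)
        for r1, r2 in product(relations, repeat=2):
            # Join r1 and r2 on the shared middle entity to get composed (head, tail) pairs.
            composed = {(h1, t2) for h1, t1 in by_rel[r1]
                                 for h2, t2 in by_rel[r2] if t1 == h2}
            if not composed:
                continue
            for r3 in relations:
                support = composed & set(by_rel[r3])
                confidence = len(support) / len(composed)
                if support and confidence >= min_confidence:
                    rules.append((r1, r2, r3, confidence))
        return rules

    for r1, r2, r3, conf in mine_composition_rules():
        print(f"{r1}(x, y) ∧ {r2}(y, z) => {r3}(x, z)  [confidence={conf:.2f}]")

On this toy graph the sketch recovers born_in ∘ capital_of ⇒ citizen_of with confidence 1.0; instantiating such rules over unseen entities is one plausible way to generate reasoning-chain training examples, though the paper's concrete procedure may differ.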
