

Chain-of-Knowledge: Integrating Knowledge Reasoning into Large Language Models by Learning from Knowledge Graphs

June 30, 2024
Authors: Yifei Zhang, Xintao Wang, Jiaqing Liang, Sirui Xia, Lida Chen, Yanghua Xiao
cs.AI

Abstract

Large Language Models (LLMs) have exhibited impressive proficiency in various natural language processing (NLP) tasks, which involve increasingly complex reasoning. Knowledge reasoning, a primary type of reasoning, aims at deriving new knowledge from existing knowledge. While it has been widely studied in the context of knowledge graphs (KGs), knowledge reasoning in LLMs remains underexplored. In this paper, we introduce Chain-of-Knowledge (CoK), a comprehensive framework for knowledge reasoning, including methodologies for both dataset construction and model learning. For dataset construction, we create KnowReason via rule mining on KGs. For model learning, we observe rule overfitting induced by naive training. Hence, we enhance CoK with a trial-and-error mechanism that simulates the human process of internal knowledge exploration. We conduct extensive experiments with KnowReason. Our results show the effectiveness of CoK in refining LLMs not only in knowledge reasoning, but also on general reasoning benchmarks.
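To make the dataset-construction step concrete, the sketch below shows one common way of mining composition rules from a KG of (head, relation, tail) triples, which could then be instantiated into multi-hop reasoning chains. This is a minimal illustration under assumed conventions; the rule format, thresholds, and all identifiers here are hypothetical and are not taken from the paper's actual KnowReason pipeline.

```python
# Hypothetical sketch: mining length-2 composition rules (r1 ∘ r2 ⇒ r3) from a toy KG.
# Each mined rule can be instantiated into a multi-hop reasoning example
# (e.g., "Alice born_in Paris, Paris capital_of France ⇒ Alice nationality France").
from collections import defaultdict

# Toy KG as (head, relation, tail) triples.
triples = [
    ("Alice", "born_in", "Paris"),
    ("Paris", "capital_of", "France"),
    ("Alice", "nationality", "France"),
]

def mine_composition_rules(triples, min_support=1):
    """Find rules where (h, r1, x) and (x, r2, t) co-occur with an observed (h, r3, t)."""
    by_head = defaultdict(list)
    for h, r, t in triples:
        by_head[h].append((r, t))

    support = defaultdict(int)
    for h, r1, x in triples:
        for r2, t in by_head.get(x, []):          # second hop from the intermediate entity
            for r3, t2 in by_head.get(h, []):     # candidate conclusion relation
                if t2 == t and r3 != r1:
                    support[(r1, r2, r3)] += 1    # rule candidate: r1 ∘ r2 ⇒ r3
    return [rule for rule, s in support.items() if s >= min_support]

print(mine_composition_rules(triples))
# e.g. [('born_in', 'capital_of', 'nationality')]
```

In practice a real pipeline would also filter rules by confidence over a large KG and verbalize the instantiated chains into natural-language training examples; the support count above stands in for that step.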
