

Detecting Corpus-Level Knowledge Inconsistencies in Wikipedia with Large Language Models

September 27, 2025
Authors: Sina J. Semnani, Jirayu Burapacheep, Arpandeep Khatua, Thanawan Atchariyachanvanit, Zheng Wang, Monica S. Lam
cs.AI

Abstract

Wikipedia is the largest open knowledge corpus, widely used worldwide and serving as a key resource for training large language models (LLMs) and retrieval-augmented generation (RAG) systems. Ensuring its accuracy is therefore critical. But how accurate is Wikipedia, and how can we improve it? We focus on inconsistencies, a specific type of factual inaccuracy, and introduce the task of corpus-level inconsistency detection. We present CLAIRE, an agentic system that combines LLM reasoning with retrieval to surface potentially inconsistent claims along with contextual evidence for human review. In a user study with experienced Wikipedia editors, 87.5% reported higher confidence when using CLAIRE, and participants identified 64.7% more inconsistencies in the same amount of time. Combining CLAIRE with human annotation, we contribute WIKICOLLIDE, the first benchmark of real Wikipedia inconsistencies. Using random sampling with CLAIRE-assisted analysis, we find that at least 3.3% of English Wikipedia facts contradict another fact, with inconsistencies propagating into 7.3% of FEVEROUS and 4.0% of AmbigQA examples. Benchmarking strong baselines on this dataset reveals substantial headroom: the best fully automated system achieves an AUROC of only 75.1%. Our results show that contradictions are a measurable component of Wikipedia and that LLM-based systems like CLAIRE can provide a practical tool to help editors improve knowledge consistency at scale.
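The abstract describes CLAIRE as an agentic system that pairs retrieval with LLM reasoning to surface potentially inconsistent claims, together with evidence, for human review. The sketch below is a minimal illustration of that retrieve-then-judge loop, not CLAIRE's actual pipeline: the TF-IDF retriever, the `judge_contradiction` placeholder, the threshold, and all function and class names are assumptions made for illustration.

```python
# Hypothetical sketch of a retrieve-then-reason loop for corpus-level
# inconsistency detection. CLAIRE's real retriever, prompts, and scoring
# are not specified in the abstract; everything named here is illustrative.
from dataclasses import dataclass

from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.metrics.pairwise import cosine_similarity


@dataclass
class Candidate:
    claim: str      # claim under review
    evidence: str   # retrieved passage it may contradict
    score: float    # estimated probability of contradiction


def retrieve(claim: str, corpus: list[str], k: int = 5) -> list[str]:
    """Return the k passages most lexically similar to the claim.

    A TF-IDF retriever stands in for whatever retrieval component the
    real system uses.
    """
    vec = TfidfVectorizer().fit(corpus + [claim])
    sims = cosine_similarity(vec.transform([claim]), vec.transform(corpus))[0]
    top = sims.argsort()[::-1][:k]
    return [corpus[i] for i in top]


def judge_contradiction(claim: str, passage: str) -> float:
    """Placeholder for an LLM call returning P(claim contradicts passage).

    Swap in a model and prompt of your choice; a constant keeps this
    sketch runnable without external services.
    """
    return 0.0


def flag_inconsistencies(claims: list[str], corpus: list[str],
                         threshold: float = 0.5) -> list[Candidate]:
    """Surface claim/evidence pairs whose contradiction score clears the
    threshold, ranked for human review rather than edited automatically."""
    flagged = []
    for claim in claims:
        for passage in retrieve(claim, corpus):
            p = judge_contradiction(claim, passage)
            if p >= threshold:
                flagged.append(Candidate(claim, passage, p))
    return sorted(flagged, key=lambda c: c.score, reverse=True)
```

In an evaluation like the one the abstract reports, the per-pair scores from such a system would be compared against human contradiction labels with a ranking metric such as AUROC (e.g., `sklearn.metrics.roc_auc_score`), which is how the 75.1% figure for the best fully automated baseline should be read.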