BubbleRAG: Evidence-Driven Retrieval-Augmented Generation for Black-Box Knowledge Graphs
March 19, 2026
Authors: Duyi Pan, Tianao Lou, Xin Li, Haoze Song, Yiwen Wu, Mengyi Deng, Mingyu Yang, Wei Wang
cs.AI
Abstract
Large Language Models (LLMs) often hallucinate in knowledge-intensive tasks. Graph-based retrieval-augmented generation (RAG) has emerged as a promising solution, yet existing approaches suffer from fundamental recall and precision limitations when operating over black-box knowledge graphs -- graphs whose schema and structure are unknown in advance. We identify three core challenges: two that cause recall loss (semantic instantiation uncertainty and structural path uncertainty) and one that causes precision loss (evidential comparison uncertainty). To address them, we formalize the retrieval task as the Optimal Informative Subgraph Retrieval (OISR) problem -- a variant of the Group Steiner Tree problem -- and prove it to be both NP-hard and APX-hard. We propose BubbleRAG, a training-free pipeline that systematically optimizes for both recall and precision through semantic anchor grouping, heuristic bubble expansion to discover candidate evidence graphs (CEGs), composite ranking, and reasoning-aware expansion. Experiments on multi-hop QA benchmarks demonstrate that BubbleRAG achieves state-of-the-art results, outperforming strong baselines in both F1 and accuracy while remaining plug-and-play.
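The Group Steiner Tree formulation mentioned above asks for a small connected subgraph that touches at least one node from each group of candidate anchors. The following is a minimal, hypothetical sketch of a greedy heuristic in that spirit; the toy knowledge graph, function names, and attach-cheapest-path strategy are illustrative assumptions, not the paper's actual BubbleRAG algorithm.

```python
from collections import deque

def bfs_path(adj, src, dst):
    """Shortest path between two nodes via BFS (unweighted graph)."""
    prev = {src: None}
    q = deque([src])
    while q:
        u = q.popleft()
        if u == dst:
            path = []
            while u is not None:
                path.append(u)
                u = prev[u]
            return path[::-1]
        for v in adj[u]:
            if v not in prev:
                prev[v] = u
                q.append(v)
    return None  # dst unreachable from src

def greedy_group_steiner(adj, groups):
    """Greedy heuristic: seed a subgraph with the first anchor group,
    then attach each remaining group via its shortest connecting path."""
    tree = set(groups[0])
    edges = set()
    for group in groups[1:]:
        best = None
        for t in tree:
            for g in group:
                p = bfs_path(adj, t, g)
                if p and (best is None or len(p) < len(best)):
                    best = p
        if best:
            tree.update(best)
            edges.update(zip(best, best[1:]))
    return tree, edges

# Toy knowledge graph: each group holds candidate instantiations of
# one query anchor; the heuristic connects all groups with few edges.
adj = {
    "einstein": ["physics", "ulm"],
    "physics":  ["einstein", "nobel"],
    "ulm":      ["einstein", "germany"],
    "nobel":    ["physics"],
    "germany":  ["ulm"],
}
groups = [["einstein"], ["nobel"], ["germany"]]
tree, edges = greedy_group_steiner(adj, groups)
```

Because the exact problem is NP-hard and APX-hard, a heuristic of this shape trades optimality for speed; BubbleRAG's bubble expansion and composite ranking play an analogous role on much larger, black-box graphs.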