
LUMINA: Detecting Hallucinations in RAG System with Context-Knowledge Signals

September 26, 2025
Authors: Min-Hsuan Yeh, Yixuan Li, Tanwi Mallick
cs.AI

Abstract

Retrieval-Augmented Generation (RAG) aims to mitigate hallucinations in large language models (LLMs) by grounding responses in retrieved documents. Yet, RAG-based LLMs still hallucinate even when provided with correct and sufficient context. A growing line of work suggests that this stems from an imbalance between how models use external context and their internal knowledge, and several approaches have attempted to quantify these signals for hallucination detection. However, existing methods require extensive hyperparameter tuning, limiting their generalizability. We propose LUMINA, a novel framework that detects hallucinations in RAG systems through context-knowledge signals: external context utilization is quantified via distributional distance, while internal knowledge utilization is measured by tracking how predicted tokens evolve across transformer layers. We further introduce a framework for statistically validating these measurements. Experiments on common RAG hallucination benchmarks and four open-source LLMs show that LUMINA achieves consistently high AUROC and AUPRC scores, outperforming prior utilization-based methods by up to +13% AUROC on HalluRAG. Moreover, LUMINA remains robust under relaxed assumptions about retrieval quality and model matching, offering both effectiveness and practicality.
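To make the two signals concrete, below is a minimal, illustrative sketch of how one might approximate them with an off-the-shelf model: external context utilization as a distributional distance between the model's next-token distributions with and without the retrieved context (Jensen-Shannon divergence is an assumed choice here), and internal knowledge utilization via a logit-lens-style projection that records the top predicted token at each transformer layer. The helper names, prompt format, and modeling choices are assumptions for illustration only, not LUMINA's actual formulation or implementation.

```python
# Illustrative sketch only. LUMINA's exact measures and statistical validation
# are defined in the paper; the Jensen-Shannon distance, the logit-lens
# projection, and all helper names below are assumptions chosen for clarity.
import torch
import torch.nn.functional as F
from transformers import AutoModelForCausalLM, AutoTokenizer

model_name = "gpt2"  # small stand-in; the paper evaluates four open-source LLMs
tok = AutoTokenizer.from_pretrained(model_name)
model = AutoModelForCausalLM.from_pretrained(model_name, output_hidden_states=True)
model.eval()


def next_token_dist(prompt: str) -> torch.Tensor:
    """Return the model's distribution over the next token for a prompt."""
    ids = tok(prompt, return_tensors="pt").input_ids
    with torch.no_grad():
        logits = model(ids).logits[0, -1]
    return F.softmax(logits, dim=-1)


def context_utilization(question: str, context: str) -> float:
    """External-context signal (assumed form): distributional distance between
    the next-token distribution with and without the retrieved context,
    measured here with Jensen-Shannon divergence."""
    p = next_token_dist(f"{context}\n\nQ: {question}\nA:")
    q = next_token_dist(f"Q: {question}\nA:")
    m = 0.5 * (p + q)
    kl = lambda a, b: torch.sum(a * (torch.log(a + 1e-12) - torch.log(b + 1e-12)))
    return float(0.5 * kl(p, m) + 0.5 * kl(q, m))


def layerwise_top_tokens(prompt: str) -> list[str]:
    """Internal-knowledge signal (logit-lens style): project each layer's
    hidden state at the last position through the unembedding matrix and
    record the top predicted token, tracking where the answer emerges."""
    ids = tok(prompt, return_tensors="pt").input_ids
    with torch.no_grad():
        out = model(ids)
    lm_head = model.get_output_embeddings()
    ln_f = model.transformer.ln_f  # GPT-2-specific final layer norm
    tops = []
    for h in out.hidden_states:           # one entry per layer (plus embeddings)
        logits = lm_head(ln_f(h[0, -1]))  # normalize, then unembed to vocab space
        tops.append(tok.decode(int(logits.argmax())))
    return tops


if __name__ == "__main__":
    ctx = "The Eiffel Tower is located in Paris, France."
    print(context_utilization("Where is the Eiffel Tower?", ctx))
    print(layerwise_top_tokens(ctx + "\n\nQ: Where is the Eiffel Tower?\nA:"))
```

A low distance in `context_utilization` would suggest the retrieved context barely shifted the model's prediction, while `layerwise_top_tokens` shows at which depth the eventual answer token takes over; a hallucination detector in this spirit would combine such per-token signals, though LUMINA's concrete scoring and validation procedure should be taken from the paper itself.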