QuCo-RAG: Quantifying Uncertainty from the Pre-training Corpus for Dynamic Retrieval-Augmented Generation
December 22, 2025
Authors: Dehai Min, Kailin Zhang, Tongtong Wu, Lu Cheng
cs.AI
Abstract
Dynamic Retrieval-Augmented Generation adaptively determines when to retrieve during generation to mitigate hallucinations in large language models (LLMs). However, existing methods rely on model-internal signals (e.g., logits, entropy), which are fundamentally unreliable because LLMs are typically ill-calibrated and often exhibit high confidence in erroneous outputs. We propose QuCo-RAG, which shifts from subjective confidence to objective statistics computed from pre-training data. Our method quantifies uncertainty through two stages: (1) before generation, we identify low-frequency entities indicating long-tail knowledge gaps; (2) during generation, we verify entity co-occurrence in the pre-training corpus, where zero co-occurrence often signals hallucination risk. Both stages leverage Infini-gram for millisecond-latency queries over 4 trillion tokens, triggering retrieval when uncertainty is high. Experiments on multi-hop QA benchmarks show QuCo-RAG achieves EM gains of 5--12 points over state-of-the-art baselines with OLMo-2 models, and transfers effectively to models with undisclosed pre-training data (Llama, Qwen, GPT), improving EM by up to 14 points. Domain generalization on biomedical QA further validates the robustness of our paradigm. These results establish corpus-grounded verification as a principled, practically model-agnostic paradigm for dynamic RAG. Our code is publicly available at https://github.com/ZhishanQ/QuCo-RAG.
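The two-stage trigger described above can be sketched as follows. This is a minimal illustration, not the authors' implementation: the function and threshold names are hypothetical, `corpus_count` stands in for an Infini-gram count query over the ~4-trillion-token pre-training corpus, and the co-occurrence check is simplified to a single joint-count lookup.

```python
# Hypothetical sketch of QuCo-RAG's two-stage retrieval trigger.
# Names (corpus_count, FREQ_THRESHOLD, etc.) are illustrative assumptions.

FREQ_THRESHOLD = 100  # hypothetical cutoff for "long-tail" entity frequency


def corpus_count(query: str, counts: dict) -> int:
    """Stub for an Infini-gram count query; the real system queries a
    pre-training corpus of ~4T tokens at millisecond latency."""
    return counts.get(query, 0)


def should_retrieve_before(question_entities, counts, threshold=FREQ_THRESHOLD):
    """Stage 1 (before generation): trigger retrieval if any question
    entity is rare in the corpus, indicating a long-tail knowledge gap."""
    return any(corpus_count(e, counts) < threshold for e in question_entities)


def should_retrieve_during(entity_a, entity_b, counts):
    """Stage 2 (during generation): trigger retrieval if a generated
    entity pair never co-occurs in the corpus (hallucination risk)."""
    return corpus_count(f"{entity_a} {entity_b}", counts) == 0
```

A toy run with mocked counts: a frequent entity like "Paris" would not trigger Stage 1, while a rare entity or a zero-co-occurrence pair would trigger retrieval.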