DeepRAG: Thinking to Retrieval Step by Step for Large Language Models
February 3, 2025
Authors: Xinyan Guan, Jiali Zeng, Fandong Meng, Chunlei Xin, Yaojie Lu, Hongyu Lin, Xianpei Han, Le Sun, Jie Zhou
cs.AI
Abstract
Large Language Models (LLMs) have shown remarkable potential in reasoning
while they still suffer from severe factual hallucinations due to timeliness,
accuracy, and coverage of parametric knowledge. Meanwhile, integrating
reasoning with retrieval-augmented generation (RAG) remains challenging due to
ineffective task decomposition and redundant retrieval, which can introduce
noise and degrade response quality. In this paper, we propose DeepRAG, a
framework that models retrieval-augmented reasoning as a Markov Decision
Process (MDP), enabling strategic and adaptive retrieval. By iteratively
decomposing queries, DeepRAG dynamically determines whether to retrieve
external knowledge or rely on parametric reasoning at each step. Experiments
show that DeepRAG improves retrieval efficiency while improving answer accuracy
by 21.99%, demonstrating its effectiveness in optimizing retrieval-augmented
reasoning.
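The abstract describes a decision loop: the query is decomposed into atomic subqueries, and at each step the model chooses (as an MDP action) between retrieving external knowledge and answering from its own parametric knowledge. The sketch below illustrates that loop with toy stand-ins; all names (`ToyLLM`, `ToyRetriever`, `deep_rag_loop`, the knowledge dictionaries) are hypothetical and do not reflect the paper's actual implementation or API.

```python
# Illustrative sketch of a DeepRAG-style adaptive retrieval loop.
# All classes and data here are hypothetical stand-ins, not the authors' code.

class ToyRetriever:
    """Stands in for an external knowledge source (e.g., a search index)."""
    def __init__(self, corpus):
        self.corpus = corpus

    def retrieve(self, subquery):
        return self.corpus.get(subquery, "")


class ToyLLM:
    """Stands in for a model with limited parametric knowledge."""
    def __init__(self, parametric):
        self.parametric = parametric

    def knows(self, subquery):
        # The per-step "retrieval decision": use parameters only when confident.
        return subquery in self.parametric

    def answer(self, subquery, evidence=None):
        # Answer from retrieved evidence if provided, else from parameters.
        return evidence if evidence else self.parametric[subquery]


def deep_rag_loop(subqueries, llm, retriever):
    """For each atomic subquery, retrieve only when the model lacks the fact,
    mirroring the strategic/adaptive retrieval described in the abstract."""
    context = []
    for sq in subqueries:
        if llm.knows(sq):
            context.append((sq, llm.answer(sq), "parametric"))
        else:
            evidence = retriever.retrieve(sq)
            context.append((sq, llm.answer(sq, evidence=evidence), "retrieved"))
    return context


trace = deep_rag_loop(
    ["capital of France?", "host city of the 2024 Olympics?"],
    ToyLLM({"capital of France?": "Paris"}),
    ToyRetriever({"host city of the 2024 Olympics?": "Paris, France"}),
)
```

In this toy version, the first subquery is answered parametrically and the second triggers retrieval, which is the efficiency behavior the paper targets: avoiding redundant retrieval when the model's own knowledge suffices.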