
LLM-Independent Adaptive RAG: Let the Question Speak for Itself

May 7, 2025
作者: Maria Marina, Nikolay Ivanov, Sergey Pletenev, Mikhail Salnikov, Daria Galimzianova, Nikita Krayko, Vasily Konovalov, Alexander Panchenko, Viktor Moskvoretskii
cs.AI

Abstract

Large Language Models (LLMs) are prone to hallucinations, and Retrieval-Augmented Generation (RAG) helps mitigate this, but at a high computational cost while risking misinformation. Adaptive retrieval aims to retrieve only when necessary, but existing approaches rely on LLM-based uncertainty estimation, which remains inefficient and impractical. In this study, we introduce lightweight, LLM-independent adaptive retrieval methods based on external information. We investigate 27 features, organized into 7 groups, and their hybrid combinations. We evaluate these methods on 6 QA datasets, assessing QA performance and efficiency. The results show that our approach matches the performance of complex LLM-based methods while achieving significant efficiency gains, demonstrating the potential of external information for adaptive retrieval.
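As a rough illustration of the idea only: the sketch below gates retrieval with a lightweight classifier over cheap, LLM-independent question features. The specific features, training data, and retriever/generator stubs here are assumptions for illustration and do not reproduce the paper's actual 27-feature set or models.

```python
# Minimal sketch of an LLM-independent adaptive-retrieval gate.
# A lightweight classifier decides, from external question features alone,
# whether retrieval is needed before generating an answer.
# The features and toy labels below are illustrative assumptions.

import numpy as np
from sklearn.linear_model import LogisticRegression

WH_WORDS = ("who", "what", "when", "where", "which", "why", "how")

def question_features(question: str) -> list:
    """Cheap features computed from the question text alone (no LLM call)."""
    tokens = question.lower().split()
    return [
        float(len(tokens)),                               # question length
        float(any(ch.isdigit() for ch in question)),      # mentions a number or date
        float(bool(tokens) and tokens[0] in WH_WORDS),    # starts with a wh-word
    ]

# Toy training data: 1 = retrieval was needed, 0 = the LLM answered correctly alone.
questions = [
    "Who wrote Hamlet?",
    "What is the capital of France?",
    "Which 2023 paper introduced the XYZ benchmark?",
    "When did the most recent solar eclipse occur?",
]
needs_retrieval = [0, 0, 1, 1]

X = np.array([question_features(q) for q in questions])
gate = LogisticRegression().fit(X, needs_retrieval)

def retrieve(question: str) -> str:
    """Placeholder retriever; in practice this would query a search index."""
    return "retrieved passage"

def llm(question: str, context: str = "") -> str:
    """Placeholder generator; in practice this would call the actual LLM."""
    return f"answer to: {question}"

def answer(question: str) -> str:
    """Retrieve only when the gate predicts external context is needed."""
    if gate.predict([question_features(question)])[0]:
        return llm(question, retrieve(question))
    return llm(question)  # answer directly, saving the retrieval cost
```

Because the gate uses only features of the question itself, the decision to retrieve costs a single small classifier call rather than an extra LLM pass for uncertainty estimation.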
