

Micro-Act: Mitigate Knowledge Conflict in Question Answering via Actionable Self-Reasoning

June 5, 2025
作者: Nan Huo, Jinyang Li, Bowen Qin, Ge Qu, Xiaolong Li, Xiaodong Li, Chenhao Ma, Reynold Cheng
cs.AI

Abstract

Retrieval-Augmented Generation (RAG) systems commonly suffer from knowledge conflicts, where retrieved external knowledge contradicts the inherent, parametric knowledge of large language models (LLMs). This adversely affects performance on downstream tasks such as question answering (QA). Existing approaches often attempt to mitigate conflicts by directly comparing the two knowledge sources side by side, but this can overwhelm LLMs with extraneous or lengthy context, ultimately hindering their ability to identify and resolve inconsistencies. To address this issue, we propose Micro-Act, a framework with a hierarchical action space that automatically perceives context complexity and adaptively decomposes each knowledge source into a sequence of fine-grained comparisons. These comparisons are represented as actionable steps, enabling reasoning beyond the superficial context. Through extensive experiments on five benchmark datasets, Micro-Act consistently achieves significant increases in QA accuracy over state-of-the-art baselines across all five datasets and three conflict types, especially the temporal and semantic types, where all baselines fail significantly. More importantly, Micro-Act simultaneously exhibits robust performance on non-conflict questions, highlighting its practical value in real-world RAG applications.
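The core idea of decomposing each knowledge source into fine-grained comparisons, rather than comparing two long passages side by side, can be illustrated with a minimal sketch. Everything here is an assumption for illustration only: the toy `attribute: value` claim format and the function names (`decompose`, `fine_grained_conflicts`) are hypothetical, whereas the paper's actual framework uses LLM-driven actions over free-form context.

```python
# Hypothetical sketch of fine-grained conflict detection between a
# retrieved passage and the model's parametric knowledge. Both the
# claim format and all names are illustrative assumptions, not the
# paper's implementation.

def decompose(source: str) -> dict:
    """Split a knowledge source into {attribute: value} claims.

    Assumes a toy "attribute: value; attribute: value" format so the
    sketch stays self-contained.
    """
    claims = {}
    for part in source.split(";"):
        if ":" in part:
            key, value = part.split(":", 1)
            claims[key.strip()] = value.strip()
    return claims


def fine_grained_conflicts(retrieved: str, parametric: str) -> list:
    """Return the attributes on which the two sources disagree.

    Each attribute is compared individually, so a single conflicting
    fact is isolated instead of being buried in a lengthy context.
    """
    r, p = decompose(retrieved), decompose(parametric)
    return sorted(k for k in r.keys() & p.keys() if r[k] != p[k])


retrieved = "capital: Ottawa; founded: 1857"
parametric = "capital: Ottawa; founded: 1867"
print(fine_grained_conflicts(retrieved, parametric))  # → ['founded']
```

The per-attribute comparison pinpoints the single conflicting fact (here, a temporal one), which is the intuition behind why step-wise decomposition helps precisely on the temporal and semantic conflict types where side-by-side baselines struggle.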