UniDoc-RL: Coarse-to-Fine Visual RAG with Hierarchical Actions and Dense Rewards
April 16, 2026
Authors: Jun Wang, Shuo Tan, Zelong Sun, Tiancheng Gu, Yongle Zhao, Ziyong Feng, Kaicheng Yang, Cewu Lu
cs.AI
Abstract
Retrieval-Augmented Generation (RAG) extends Large Vision-Language Models (LVLMs) with external visual knowledge. However, existing visual RAG systems typically rely on generic retrieval signals that overlook the fine-grained visual semantics essential for complex reasoning. To address this limitation, we propose UniDoc-RL, a unified reinforcement learning framework in which an LVLM agent jointly performs retrieval, reranking, active visual perception, and reasoning. UniDoc-RL formulates visual information acquisition as a sequential decision-making problem with a hierarchical action space. Specifically, it progressively refines visual evidence from coarse-grained document retrieval to fine-grained image selection and active region cropping, allowing the model to suppress irrelevant content and attend to information-dense regions. For effective end-to-end training, we introduce a dense multi-reward scheme that provides task-aware supervision for each action. Based on Group Relative Policy Optimization (GRPO), UniDoc-RL aligns agent behavior with multiple objectives without relying on a separate value network. To support this training paradigm, we curate a comprehensive dataset of high-quality reasoning trajectories with fine-grained action annotations. Experiments on three benchmarks demonstrate that UniDoc-RL consistently surpasses state-of-the-art baselines, yielding up to 17.7% gains over prior RL-based methods.
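To make the hierarchical action space concrete, here is a minimal sketch of one rollout of the coarse-to-fine acquisition loop the abstract describes. Every name in it (the `agent` interface, `retrieve_documents`, `select_images`, `crop_region`, the reward stubs) is hypothetical; the paper does not publish this interface, and the sketch only illustrates the progression from document retrieval to image selection to region cropping, with one dense, task-aware reward recorded per action.

```python
# Hypothetical sketch of UniDoc-RL's coarse-to-fine acquisition loop.
# All names are illustrative assumptions, not the paper's actual API.
from dataclasses import dataclass, field


@dataclass
class Trajectory:
    """Sequence of hierarchical actions plus one dense reward per action."""
    actions: list = field(default_factory=list)
    rewards: list = field(default_factory=list)


def retrieval_reward(result, query):
    # Placeholder: the abstract says each action gets task-aware supervision,
    # but does not give the exact reward functions.
    return 0.0


selection_reward = crop_reward = answer_reward = retrieval_reward


def run_episode(agent, query, corpus):
    """One rollout: coarse document retrieval -> image selection -> region crop."""
    traj = Trajectory()

    # Coarse: retrieve candidate documents for the query.
    docs = agent.retrieve_documents(query, corpus, top_k=10)
    traj.actions.append(("retrieve", docs))
    traj.rewards.append(retrieval_reward(docs, query))

    # Medium: rerank the retrieved documents and keep the best page images.
    images = agent.select_images(query, docs, top_k=3)
    traj.actions.append(("select", images))
    traj.rewards.append(selection_reward(images, query))

    # Fine: actively crop information-dense regions, suppressing the rest.
    regions = [agent.crop_region(query, img) for img in images]
    traj.actions.append(("crop", regions))
    traj.rewards.append(crop_reward(regions, query))

    # Reason over the refined visual evidence to produce the final answer.
    answer = agent.reason(query, regions)
    traj.rewards.append(answer_reward(answer, query))
    return answer, traj
```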
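The claim that no separate value network is needed follows from how GRPO computes advantages: for each query $q$, a group of $G$ rollouts is sampled, and each rollout's scalar reward is normalized against its group, so the group mean acts as the baseline. The formulation below is the standard published GRPO objective (group-relative advantage, clipped importance ratio, and a KL penalty toward a reference policy); how UniDoc-RL folds its multiple per-action rewards into the scalar $r_i$ is not specified in the abstract.

$$
\hat{A}_i = \frac{r_i - \operatorname{mean}(\{r_1,\dots,r_G\})}{\operatorname{std}(\{r_1,\dots,r_G\})},
\qquad
\rho_i = \frac{\pi_\theta(o_i \mid q)}{\pi_{\theta_{\text{old}}}(o_i \mid q)},
$$

$$
\mathcal{J}(\theta) = \mathbb{E}\!\left[\frac{1}{G}\sum_{i=1}^{G} \min\!\big(\rho_i \hat{A}_i,\ \operatorname{clip}(\rho_i,\, 1-\epsilon,\, 1+\epsilon)\,\hat{A}_i\big)\right] - \beta\, \mathbb{D}_{\text{KL}}\!\left(\pi_\theta \,\|\, \pi_{\text{ref}}\right).
$$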