

UniDoc-RL: Coarse-to-Fine Visual RAG with Hierarchical Actions and Dense Rewards

April 16, 2026
作者: Jun Wang, Shuo Tan, Zelong Sun, Tiancheng Gu, Yongle Zhao, Ziyong Feng, Kaicheng Yang, Cewu Lu
cs.AI

Abstract

Retrieval-Augmented Generation (RAG) extends Large Vision-Language Models (LVLMs) with external visual knowledge. However, existing visual RAG systems typically rely on generic retrieval signals that overlook the fine-grained visual semantics essential for complex reasoning. To address this limitation, we propose UniDoc-RL, a unified reinforcement learning framework in which an LVLM agent jointly performs retrieval, reranking, active visual perception, and reasoning. UniDoc-RL formulates visual information acquisition as a sequential decision-making problem with a hierarchical action space. Specifically, it progressively refines visual evidence from coarse-grained document retrieval to fine-grained image selection and active region cropping, allowing the model to suppress irrelevant content and attend to information-dense regions. For effective end-to-end training, we introduce a dense multi-reward scheme that provides task-aware supervision for each action. Based on Group Relative Policy Optimization (GRPO), UniDoc-RL aligns agent behavior with multiple objectives without relying on a separate value network. To support this training paradigm, we curate a comprehensive dataset of high-quality reasoning trajectories with fine-grained action annotations. Experiments on three benchmarks demonstrate that UniDoc-RL consistently surpasses state-of-the-art baselines, yielding up to 17.7% gains over prior RL-based methods.
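The abstract's core training idea — group-relative advantages over dense, per-action rewards, with no critic network — can be sketched as follows. This is a minimal illustration, not the paper's implementation: the reward components and their weights (`retrieval_hit`, `rerank_gain`, `crop_iou`, `answer_correct`) are hypothetical names standing in for the paper's task-aware supervision signals.

```python
import numpy as np

def group_relative_advantages(rewards, eps=1e-8):
    """GRPO-style advantage: normalize each rollout's total reward
    against its own group's mean and std, so no separate value
    network is needed to estimate a baseline."""
    r = np.asarray(rewards, dtype=float)
    return (r - r.mean()) / (r.std() + eps)

def trajectory_reward(retrieval_hit, rerank_gain, crop_iou, answer_correct,
                      weights=(0.2, 0.2, 0.2, 0.4)):
    """Illustrative dense multi-reward: a weighted sum of per-action
    signals (component names and weights are hypothetical)."""
    components = (retrieval_hit, rerank_gain, crop_iou, answer_correct)
    return sum(w * c for w, c in zip(weights, components))

# A group of G = 4 rollouts sampled for the same query.
rewards = [
    trajectory_reward(1, 0.5, 0.8, 1),  # good retrieval, correct answer
    trajectory_reward(1, 0.2, 0.3, 0),  # retrieved but answered wrong
    trajectory_reward(0, 0.0, 0.0, 0),  # missed retrieval entirely
    trajectory_reward(1, 0.9, 0.7, 1),  # best overall trajectory
]
advs = group_relative_advantages(rewards)
```

Rollouts above the group mean get positive advantages and are reinforced; those below get negative ones — the group itself plays the role a learned critic would otherwise play.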