

ViGoR: Improving Visual Grounding of Large Vision Language Models with Fine-Grained Reward Modeling

February 9, 2024
Authors: Siming Yan, Min Bai, Weifeng Chen, Xiong Zhou, Qixing Huang, Li Erran Li
cs.AI

Abstract

By combining the natural language understanding, generation capabilities, and breadth of knowledge of large language models with image perception, recent large vision language models (LVLMs) have shown unprecedented reasoning capabilities in the real world. However, the generated text often suffers from inaccurate grounding in the visual input, resulting in errors such as hallucinating nonexistent scene elements, missing significant parts of the scene, and inferring incorrect attributes of and relationships between objects. To address these issues, we introduce a novel framework, ViGoR (Visual Grounding Through Fine-Grained Reward Modeling), which uses fine-grained reward modeling to significantly enhance the visual grounding of LVLMs over pre-trained baselines. This improvement is achieved efficiently with much cheaper human evaluations instead of full supervision, as well as with automated methods. We demonstrate the effectiveness of our approach through numerous metrics on several benchmarks. Additionally, we construct a comprehensive and challenging dataset specifically designed to validate the visual grounding capabilities of LVLMs. Finally, we plan to release our human annotations, comprising fine-grained evaluations of approximately 16,000 image and generated-text pairs, to support related research in the community.