Controlling Multimodal LLMs via Reward-guided Decoding
August 15, 2025
Authors: Oscar Mañas, Pierluca D'Oro, Koustuv Sinha, Adriana Romero-Soriano, Michal Drozdzal, Aishwarya Agrawal
cs.AI
Abstract
As Multimodal Large Language Models (MLLMs) gain widespread applicability, it
is becoming increasingly desirable to adapt them for diverse user needs. In
this paper, we study the adaptation of MLLMs through controlled decoding. To
achieve this, we introduce the first method for reward-guided decoding of MLLMs
and demonstrate its application in improving their visual grounding. Our method
involves building reward models for visual grounding and using them to guide
the MLLM's decoding process. Concretely, we build two separate reward models to
independently control the degree of object precision and recall in the model's
output. Our approach enables on-the-fly controllability of an MLLM's inference
process in two ways: first, by giving control over the relative importance of
each reward function during decoding, allowing a user to dynamically trade off
object precision for recall in image captioning tasks; second, by giving
control over the breadth of the search during decoding, allowing the user to
control the trade-off between the amount of test-time compute and the degree of
visual grounding. We evaluate our method on standard object hallucination
benchmarks, showing that it provides significant controllability over MLLM
inference, while consistently outperforming existing hallucination mitigation
methods.
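
To make the idea concrete, below is a minimal, hypothetical sketch of what reward-guided decoding with two reward models could look like: at each decoding step, a pool of candidate continuations (the search breadth) is scored by a weighted sum of a precision reward and a recall reward, and the weight alpha lets a user trade the two off on the fly. All function names, the proposal mechanism, and the toy reward stubs are assumptions for illustration, not the paper's actual implementation.

```python
# Hypothetical sketch of reward-guided decoding with two reward models.
# The proposal function and reward stubs are placeholders, not the paper's code.

import random
from dataclasses import dataclass
from typing import Callable, List


@dataclass
class DecodeConfig:
    alpha: float = 0.5    # relative importance of precision vs. recall reward
    breadth: int = 8      # search breadth: candidates scored per decoding step
    max_steps: int = 32   # cap on decoding steps


def reward_guided_decode(
    propose: Callable[[str, int], List[str]],   # LM proposal fn (hypothetical)
    precision_reward: Callable[[str], float],   # reward model 1 (hypothetical)
    recall_reward: Callable[[str], float],      # reward model 2 (hypothetical)
    prompt: str,
    cfg: DecodeConfig,
) -> str:
    """Greedy segment-level search guided by a weighted sum of two rewards."""
    text = prompt
    for _ in range(cfg.max_steps):
        candidates = propose(text, cfg.breadth)
        if not candidates:
            break
        # Score each candidate continuation with the combined reward; alpha
        # lets the user trade off object precision against recall on the fly.
        scored = [
            (cfg.alpha * precision_reward(text + c)
             + (1.0 - cfg.alpha) * recall_reward(text + c), c)
            for c in candidates
        ]
        _, best = max(scored, key=lambda sc: sc[0])
        text += best
    return text


if __name__ == "__main__":
    # Toy stand-ins so the sketch runs end to end.
    visible_objects = {"dog", "frisbee", "park"}
    hallucinated = {"unicorn", "cat"}

    def propose(_context: str, k: int) -> List[str]:
        vocab = [" dog", " frisbee", " park", " unicorn", " cat"]
        return random.sample(vocab, min(k, len(vocab)))

    def precision_reward(text: str) -> float:
        # Penalize mentions of objects not visible in the (imaginary) image.
        return -sum(1 for w in hallucinated if w in text)

    def recall_reward(text: str) -> float:
        # Reward covering more of the visible objects.
        return sum(1 for o in visible_objects if o in text)

    cfg = DecodeConfig(alpha=0.7, breadth=3, max_steps=5)
    print(reward_guided_decode(propose, precision_reward, recall_reward, "A", cfg))
```

In this sketch, increasing alpha favors precision (fewer hallucinated objects) while decreasing it favors recall (more visible objects mentioned), and increasing breadth spends more test-time compute per step in exchange for better-grounded outputs, mirroring the two controls described in the abstract.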