CPPO: Contrastive Perception for Vision Language Policy Optimization
January 1, 2026
Authors: Ahmad Rezaei, Mohsen Gholami, Saeed Ranjbar Alvar, Kevin Cannons, Mohammad Asiful Hossain, Zhou Weimin, Shunbo Zhou, Yong Zhang, Mohammad Akbari
cs.AI
Abstract
We introduce CPPO, a Contrastive Perception Policy Optimization method for finetuning vision-language models (VLMs). While reinforcement learning (RL) has advanced reasoning in language models, extending it to multimodal reasoning requires improving both perception and reasoning. Prior works tackle this challenge mainly with explicit perception rewards, but disentangling perception tokens from reasoning tokens is difficult: existing approaches require extra LLMs, rely on ground-truth data, force the policy model to separate perception from reasoning, or apply rewards indiscriminately to all output tokens. CPPO addresses this problem by detecting perception tokens via entropy shifts in the model outputs under perturbed input images. CPPO then extends the RL objective with a Contrastive Perception Loss (CPL) that enforces consistency under information-preserving perturbations and sensitivity under information-removing ones. Experiments show that CPPO surpasses previous perception-rewarding methods while avoiding extra models, making training more efficient and scalable.
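For readers who want a concrete picture of the idea, the following is a minimal, illustrative sketch (not the authors' implementation) of how entropy-shift detection of perception tokens and a contrastive consistency/sensitivity loss could be wired up in PyTorch. The function names, the KL-based loss terms, and the threshold value are assumptions made here for illustration; the paper's actual CPL formulation may differ.

```python
# Illustrative sketch only: hypothetical names and thresholds, not the CPPO release code.
import torch
import torch.nn.functional as F


def token_entropy(logits):
    """Per-token entropy of the next-token distribution; logits: (seq_len, vocab)."""
    probs = F.softmax(logits, dim=-1)
    return -(probs * probs.clamp_min(1e-9).log()).sum(dim=-1)


def detect_perception_tokens(logits_clean, logits_perturbed, threshold=0.5):
    """Mark tokens whose entropy shifts strongly when the input image is perturbed."""
    shift = (token_entropy(logits_perturbed) - token_entropy(logits_clean)).abs()
    return shift > threshold  # boolean mask, shape (seq_len,)


def contrastive_perception_loss(logits_clean, logits_keep, logits_drop, mask):
    """Consistency under info-preserving and sensitivity under info-removing perturbations."""
    log_p_clean = F.log_softmax(logits_clean, dim=-1)
    log_p_keep = F.log_softmax(logits_keep, dim=-1)
    log_p_drop = F.log_softmax(logits_drop, dim=-1)

    # KL(clean || keep): small when the output stays consistent under an
    # information-preserving perturbation (e.g. mild color jitter).
    consistency = F.kl_div(log_p_keep, log_p_clean, log_target=True,
                           reduction="none").sum(-1)

    # Negative KL(clean || drop): encourages the output to change when image
    # information is removed (e.g. the relevant region is masked out).
    sensitivity = -F.kl_div(log_p_drop, log_p_clean, log_target=True,
                            reduction="none").sum(-1)

    per_token = consistency + sensitivity
    m = mask.float()  # only perception tokens contribute to the loss
    return (per_token * m).sum() / m.sum().clamp_min(1.0)
```

In a real training pipeline the sensitivity term would presumably be bounded (e.g. by a margin or clipping), since a raw negative KL is unbounded below, and the perception-token mask would come from the same entropy-shift detector applied to the sampled rollouts.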