VisPlay: Self-Evolving Vision-Language Models from Images
November 19, 2025
Authors: Yicheng He, Chengsong Huang, Zongxia Li, Jiaxin Huang, Yonghui Yang
cs.AI
Abstract
Reinforcement learning (RL) provides a principled framework for improving Vision-Language Models (VLMs) on complex reasoning tasks. However, existing RL approaches often rely on human-annotated labels or task-specific heuristics to define verifiable rewards, both of which are costly and difficult to scale. We introduce VisPlay, a self-evolving RL framework that enables VLMs to autonomously improve their reasoning abilities using large amounts of unlabeled image data. Starting from a single base VLM, VisPlay assigns the model two interacting roles: an Image-Conditioned Questioner that formulates challenging yet answerable visual questions, and a Multimodal Reasoner that generates silver responses. These roles are jointly trained with Group Relative Policy Optimization (GRPO), which incorporates diversity and difficulty rewards to balance the complexity of generated questions with the quality of the silver answers. VisPlay scales efficiently across two model families: when trained on Qwen2.5-VL and MiMo-VL, it achieves consistent improvements in visual reasoning, compositional generalization, and hallucination reduction across eight benchmarks, including MM-Vet and MMMU, demonstrating a scalable path toward self-evolving multimodal intelligence. The project page is available at https://bruno686.github.io/VisPlay/
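The abstract does not spell out the reward definitions or the self-play loop, so the following is only a minimal, hypothetical sketch of how a GRPO-style questioner update with difficulty and diversity rewards could look. Every function name here (difficulty_reward, diversity_reward, group_relative_advantages) and the exact reward shapes are assumptions made for illustration, not the paper's actual implementation.

```python
# Minimal, hypothetical sketch of a questioner-side reward and GRPO-style
# group-relative advantage computation for self-play on unlabeled images.
# All names and reward shapes below are illustrative assumptions, not the
# paper's actual implementation.
import random
from statistics import mean, pstdev


def difficulty_reward(correct_fraction: float) -> float:
    # Assumed shaping: prefer questions the reasoner answers correctly only
    # part of the time ("challenging yet answerable"), peaking near 50%.
    return 1.0 - 2.0 * abs(correct_fraction - 0.5)


def diversity_reward(question: str, previous: list) -> float:
    # Assumed proxy: penalize high token overlap with earlier questions so
    # the questioner keeps exploring new content of the image.
    if not previous:
        return 1.0
    q_tokens = set(question.split())
    overlaps = [len(q_tokens & set(p.split())) / max(len(q_tokens), 1) for p in previous]
    return 1.0 - max(overlaps)


def group_relative_advantages(rewards):
    # GRPO-style normalization: each sample's advantage is its reward
    # standardized against the mean and std of its own rollout group.
    mu, sigma = mean(rewards), pstdev(rewards) or 1.0
    return [(r - mu) / sigma for r in rewards]


# Toy rollout for one unlabeled image: a group of candidate questions, each
# answered several times by a stubbed reasoner (random correctness here).
random.seed(0)
history = []
rewards = []
for i in range(4):
    question = f"What is object {i} doing in the image?"
    attempts = [random.random() < 0.6 for _ in range(8)]  # stand-in for reasoner rollouts
    frac = sum(attempts) / len(attempts)
    # Combined questioner reward (the additive mix is an assumption).
    rewards.append(difficulty_reward(frac) + diversity_reward(question, history))
    history.append(question)

print(group_relative_advantages(rewards))  # advantages that would drive the policy update
```

The group-relative normalization mirrors GRPO's core idea of scoring each sample against its own rollout group rather than a learned value baseline; the difficulty shaping that peaks at intermediate reasoner accuracy is one plausible way to operationalize "challenging yet answerable" questions, but the paper's actual reward design may differ.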