

MetaphorStar: Image Metaphor Understanding and Reasoning with End-to-End Visual Reinforcement Learning

February 11, 2026
Authors: Chenhao Zhang, Yazhe Niu, Hongsheng Li
cs.AI

Abstract

Metaphorical comprehension in images remains a critical challenge for modern AI systems. While Multimodal Large Language Models (MLLMs) excel at basic Visual Question Answering (VQA), they consistently struggle to grasp the nuanced cultural, emotional, and contextual implications embedded in visual content. This difficulty stems from the task's demand for sophisticated multi-hop reasoning, cultural context, and Theory of Mind (ToM) capabilities, which current models lack. To fill this gap, we propose MetaphorStar, the first end-to-end visual reinforcement learning (RL) framework for image implication tasks. Our framework includes three core components: the fine-grained dataset TFQ-Data, the visual RL method TFQ-GRPO, and the well-structured benchmark TFQ-Bench. Our fully open-source MetaphorStar family, trained with TFQ-GRPO on TFQ-Data, improves performance on image implication benchmarks by an average of 82.6%. Compared with more than 20 mainstream MLLMs, MetaphorStar-32B achieves state-of-the-art (SOTA) results on Multiple-Choice and Open-Style Questions and significantly outperforms the top closed-source model Gemini-3.0-pro on True-False Questions. Crucially, our experiments reveal that learning image implication tasks improves general understanding, especially complex visual reasoning. We further provide a systematic analysis of model parameter scaling, training data scaling, and the impact of different model architectures and training strategies, demonstrating the broad applicability of our method. All model weights, datasets, and method code are open-sourced at https://metaphorstar.github.io.
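The abstract names TFQ-GRPO as the visual RL method but gives no implementation details here. For orientation only, GRPO-style methods replace a learned critic with group-relative advantages: several answers are sampled per prompt, scored, and each score is normalized against its own group before a PPO-style clipped update. Below is a minimal sketch of that core computation, assuming sequence-level log-probabilities and rule-based rewards (e.g., exact-match scoring of True-False or Multiple-Choice answers); the function names and reward scheme are illustrative assumptions, not taken from the paper.

```python
import torch

def grpo_advantages(rewards: torch.Tensor) -> torch.Tensor:
    # rewards: [num_prompts, group_size] scalar rewards for each sampled answer.
    # GRPO normalizes each reward against its own group's statistics,
    # removing the need for a learned value/critic network.
    mean = rewards.mean(dim=-1, keepdim=True)
    std = rewards.std(dim=-1, keepdim=True)
    return (rewards - mean) / (std + 1e-8)

def grpo_loss(logp_new: torch.Tensor,
              logp_old: torch.Tensor,
              advantages: torch.Tensor,
              clip_eps: float = 0.2) -> torch.Tensor:
    # logp_new / logp_old: [num_prompts, group_size] sequence log-probs
    # under the current policy and the (frozen) sampling policy.
    ratio = (logp_new - logp_old).exp()
    unclipped = ratio * advantages
    clipped = ratio.clamp(1 - clip_eps, 1 + clip_eps) * advantages
    # PPO-style clipped surrogate is maximized, so negate it as a loss.
    return -torch.min(unclipped, clipped).mean()

# Toy usage: 2 prompts, 4 sampled answers each, binary correctness rewards.
rewards = torch.tensor([[1., 0., 0., 1.], [0., 0., 1., 0.]])
adv = grpo_advantages(rewards)
logp_old = torch.randn(2, 4)
logp_new = logp_old + 0.01 * torch.randn(2, 4)
loss = grpo_loss(logp_new, logp_old, adv)
```

In this scheme the group normalization is what makes verifiable, rule-based rewards (such as checking a True-False answer) usable directly, since only relative quality within a group matters.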