
Beyond Pixels: Visual Metaphor Transfer via Schema-Driven Agentic Reasoning

February 1, 2026
Authors: Yu Xu, Yuxin Zhang, Juan Cao, Lin Gao, Chunyu Wang, Oliver Deussen, Tong-Yee Lee, Fan Tang
cs.AI

Abstract

A visual metaphor constitutes a high-order form of human creativity, employing cross-domain semantic fusion to transform abstract concepts into impactful visual rhetoric. Despite the remarkable progress of generative AI, existing models remain largely confined to pixel-level instruction alignment and surface-level appearance preservation, failing to capture the underlying abstract logic necessary for genuine metaphorical generation. To bridge this gap, we introduce the task of Visual Metaphor Transfer (VMT), which challenges models to autonomously decouple the "creative essence" from a reference image and re-materialize that abstract logic onto a user-specified target subject. We propose a cognitively inspired, multi-agent framework that operationalizes Conceptual Blending Theory (CBT) through a novel Schema Grammar (G). This structured representation decouples relational invariants from specific visual entities, providing a rigorous foundation for cross-domain logic re-instantiation. Our pipeline executes VMT through a collaborative system of specialized agents: a perception agent that distills the reference into a schema, a transfer agent that maintains generic-space invariance to discover apt carriers, a generation agent for high-fidelity synthesis, and a hierarchical diagnostic agent that mimics a professional critic, performing closed-loop backtracking to identify and rectify errors across abstract logic, component selection, and prompt encoding. Extensive experiments and human evaluations demonstrate that our method significantly outperforms SOTA baselines in metaphor consistency, analogy appropriateness, and visual creativity, paving the way for automated high-impact creative applications in advertising and media. Source code will be made publicly available.
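
To make the agent roles concrete, the sketch below traces one pass of the pipeline: the perception agent distills the reference into a schema, the transfer agent re-binds the schema's roles to carriers apt for the target subject, the generation agent synthesizes from the re-instantiated schema, and the diagnostic agent decides whether to backtrack. This is a minimal illustration only; the class and function names, the Schema fields, and the diagnostic levels are assumptions for exposition, not the authors' released code, and each stub stands in for what would be an LLM/VLM call in the actual system.

```python
# Illustrative sketch of the closed-loop VMT agent pipeline (hypothetical names).
from dataclasses import dataclass, field

@dataclass
class Schema:
    # Relational invariants abstracted away from the reference's concrete entities.
    relations: list = field(default_factory=list)   # e.g. ["{X} gradually erodes {Y}"]
    roles: dict = field(default_factory=dict)        # role -> concrete entity in the reference

@dataclass
class Verdict:
    ok: bool
    level: str = ""       # which level the critic flagged: "logic" | "components" | "prompt"
    feedback: str = ""

def perception_agent(reference_image, feedback=""):
    # Distill the reference image into a schema (stubbed).
    return Schema(relations=["{X} gradually erodes {Y}"],
                  roles={"X": "cigarette", "Y": "lungs"})

def transfer_agent(schema, target_subject):
    # Re-bind schema roles to carriers for the target subject,
    # keeping the generic-space relations untouched (stubbed).
    return {"X": "deadline", "Y": target_subject}

def generation_agent(schema, carriers):
    # Encode the re-instantiated schema into a prompt and synthesize an image (stubbed).
    prompt = schema.relations[0].format(**carriers)
    image = f"<image generated from: {prompt}>"
    return image, prompt

def diagnostic_agent(image, schema, carriers, prompt):
    # Hierarchical critic: check abstract logic, then component choice, then prompt (stubbed).
    return Verdict(ok=True)

def run_vmt(reference_image, target_subject, max_rounds=3):
    schema = perception_agent(reference_image)
    for _ in range(max_rounds):
        carriers = transfer_agent(schema, target_subject)
        image, prompt = generation_agent(schema, carriers)
        verdict = diagnostic_agent(image, schema, carriers, prompt)
        if verdict.ok:
            return image
        # Closed-loop backtracking: revise at the level the critic flagged.
        if verdict.level == "logic":
            schema = perception_agent(reference_image, feedback=verdict.feedback)
        # "components" and "prompt" errors are simply retried on the next round here.
    return image

print(run_vmt("reference.png", "a creative worker"))
```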