Everything in Its Place: Benchmarking Spatial Intelligence of Text-to-Image Models

January 28, 2026
Authors: Zengbin Wang, Xuecai Hu, Yong Wang, Feng Xiong, Man Zhang, Xiangxiang Chu
cs.AI

Abstract

Text-to-image (T2I) models have achieved remarkable success in generating high-fidelity images, but they often fail to handle complex spatial relationships, e.g., spatial perception, reasoning, or interaction. These critical aspects are largely overlooked by current benchmarks due to their short or information-sparse prompt designs. In this paper, we introduce SpatialGenEval, a new benchmark designed to systematically evaluate the spatial intelligence of T2I models, covering two key aspects: (1) SpatialGenEval comprises 1,230 long, information-dense prompts across 25 real-world scenes. Each prompt integrates 10 spatial sub-domains and 10 corresponding multiple-choice question-answer pairs, ranging from object position and layout to occlusion and causality. Our extensive evaluation of 21 state-of-the-art models reveals that higher-order spatial reasoning remains a primary bottleneck. (2) To demonstrate that the utility of our information-dense design goes beyond evaluation alone, we also construct the SpatialT2I dataset. It contains 15,400 text-image pairs with rewritten prompts that preserve information density while ensuring image consistency. Fine-tuning current foundation models (i.e., Stable Diffusion-XL, Uniworld-V1, OmniGen2) yields consistent performance gains (+4.2%, +5.7%, +4.4%) and more realistic spatial relations, highlighting a data-centric paradigm for achieving spatial intelligence in T2I models.
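As a rough illustration of how a multiple-choice benchmark of this kind might be scored, the Python sketch below aggregates a question-answering model's answers about each generated image into per-sub-domain and overall accuracy. The record layout and the answer_question helper are hypothetical assumptions for illustration, not the paper's released evaluation code.

```python
# Hypothetical sketch: turning multiple-choice answers about generated images
# into per-sub-domain accuracy scores. Data layout and answer_question() are
# illustrative assumptions, not the SpatialGenEval implementation.
from collections import defaultdict

def score_benchmark(records, answer_question):
    """records: iterable of dicts with keys
         'image'     - image generated by the T2I model under test
         'questions' - list of {'sub_domain', 'question', 'choices', 'answer'}
       answer_question(image, question, choices) -> chosen option string
    """
    correct = defaultdict(int)
    total = defaultdict(int)
    for rec in records:
        for q in rec["questions"]:
            pred = answer_question(rec["image"], q["question"], q["choices"])
            total[q["sub_domain"]] += 1
            correct[q["sub_domain"]] += int(pred == q["answer"])
    # Accuracy per spatial sub-domain plus a single overall score.
    per_domain = {d: correct[d] / total[d] for d in total}
    overall = sum(correct.values()) / max(sum(total.values()), 1)
    return per_domain, overall
```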