Exploring Spatial Intelligence from a Generative Perspective
April 22, 2026
Authors: Muzhi Zhu, Shunyao Jiang, Huanyi Zheng, Zekai Luo, Hao Zhong, Anzhou Li, Kaijun Wang, Jintao Rong, Yang Liu, Hao Chen, Tao Lin, Chunhua Shen
cs.AI
Abstract
Spatial intelligence is essential for multimodal large language models, yet current benchmarks largely assess it only from an understanding perspective. We ask whether modern generative or unified multimodal models also possess generative spatial intelligence (GSI), the ability to respect and manipulate 3D spatial constraints during image generation, and whether such capability can be measured or improved. We introduce GSI-Bench, the first benchmark designed to quantify GSI through spatially grounded image editing. It consists of two complementary components: GSI-Real, a high-quality real-world dataset built via a 3D-prior-guided generation and filtering pipeline, and GSI-Syn, a large-scale synthetic benchmark with controllable spatial operations and fully automated labeling. Together with a unified evaluation protocol, GSI-Bench enables scalable, model-agnostic assessment of spatial compliance and editing fidelity. Experiments show that fine-tuning unified multimodal models on GSI-Syn yields substantial gains on both synthetic and real tasks and, strikingly, also improves downstream spatial understanding. This provides the first clear evidence that generative training can tangibly strengthen spatial reasoning, establishing a new pathway for advancing spatial intelligence in multimodal models.
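The abstract does not spell out how "spatial compliance" is scored. As a toy illustration only, a distance-based compliance score for a spatially grounded edit might compare the instructed target 3D position of an object against its estimated position after editing; the function name, tolerance, and linear decay below are all hypothetical, not the paper's actual metric.

```python
import math

def spatial_compliance(target_xyz, observed_xyz, tol=0.1):
    """Hypothetical spatial-compliance score in [0, 1].

    Returns 1.0 when the edited object's estimated 3D position lies
    within `tol` (in scene units) of the instructed target, and decays
    linearly with the excess Euclidean distance otherwise.
    """
    dist = math.dist(target_xyz, observed_xyz)
    if dist <= tol:
        return 1.0
    return max(0.0, 1.0 - (dist - tol))

# Example: the instruction asks to move an object to (1.0, 0.0, 2.0);
# a 3D estimator places the edited object at (1.05, 0.0, 2.0).
score = spatial_compliance((1.0, 0.0, 2.0), (1.05, 0.0, 2.0))  # within tolerance -> 1.0
```

A real protocol would likely aggregate such per-object scores with an editing-fidelity term (how well unedited regions are preserved); this sketch shows only the spatial-constraint half of that trade-off.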