COOPER: A Unified Model for Cooperative Perception and Reasoning in Spatial Intelligence
December 4, 2025
Authors: Zefeng Zhang, Xiangzhao Hao, Hengzhu Tang, Zhenyu Zhang, Jiawei Sheng, Xiaodong Li, Zhenyang Li, Li Gao, Daiting Shi, Dawei Yin, Tingwen Liu
cs.AI
Abstract
Visual Spatial Reasoning is crucial for enabling Multimodal Large Language Models (MLLMs) to understand object properties and spatial relationships, yet current models still struggle with 3D-aware reasoning. Existing approaches typically enhance either perception, by augmenting RGB inputs with auxiliary modalities such as depth and segmentation, or reasoning, by training on spatial VQA datasets and applying reinforcement learning, treating the two aspects in isolation. In this work, we investigate whether a unified MLLM can develop an intrinsic ability to enhance spatial perception and, through adaptive interleaved reasoning, achieve stronger spatial intelligence. We propose COOPER, a unified MLLM that leverages depth and segmentation as auxiliary modalities and is trained in two stages to acquire auxiliary modality generation and adaptive, interleaved reasoning capabilities. COOPER achieves an average 6.91% improvement in spatial reasoning while maintaining general performance. Moreover, even a variant trained only for auxiliary modality generation attains a 7.92% gain on distance and size estimation, suggesting that learning to generate auxiliary modalities helps internalize spatial knowledge and strengthen spatial understanding.
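To make the idea of adaptive, interleaved reasoning concrete, the sketch below shows one plausible control flow: the model may emit special tokens mid-reasoning to request a depth or segmentation view of the scene before answering. This is a minimal illustrative sketch, not the authors' implementation; all names (SpatialMLLM, GEN_DEPTH, GEN_SEG, interleaved_reasoning) are hypothetical assumptions, and the model is stubbed with a fixed script.

```python
# Hypothetical sketch of adaptive interleaved reasoning with auxiliary
# modality generation. All names here are illustrative assumptions and
# do not reflect the COOPER authors' actual API or architecture.

from dataclasses import dataclass, field
from typing import List

# Special tokens the unified model may emit mid-reasoning to request
# an auxiliary view of the scene before continuing.
GEN_DEPTH = "<gen_depth>"
GEN_SEG = "<gen_seg>"
ANSWER = "<answer>"


@dataclass
class SpatialMLLM:
    """Stand-in for a unified MLLM; step() returns the next reasoning move."""
    # A fixed script replaces real decoding for the purposes of this sketch.
    script: List[str] = field(default_factory=lambda: [GEN_DEPTH, GEN_SEG, ANSWER])

    def step(self, context: List[str]) -> str:
        # A real model would decode the next move from the full context;
        # here we simply replay the scripted sequence.
        return self.script[min(len(context), len(self.script) - 1)]

    def generate_modality(self, kind: str) -> str:
        # A real unified model would generate depth / segmentation tokens here.
        return f"[{kind} map of the input image]"


def interleaved_reasoning(model: SpatialMLLM, question: str, max_steps: int = 8) -> str:
    """Interleave text reasoning with on-demand auxiliary modality generation."""
    context: List[str] = []
    for _ in range(max_steps):
        move = model.step(context)
        if move == GEN_DEPTH:
            context.append(model.generate_modality("depth"))
        elif move == GEN_SEG:
            context.append(model.generate_modality("segmentation"))
        elif move == ANSWER:
            return f"Answer to '{question}' using {len(context)} auxiliary views."
    return "No answer within step budget."


if __name__ == "__main__":
    print(interleaved_reasoning(SpatialMLLM(), "How far is the chair from the table?"))
```

The key design point this sketch tries to convey is that the same model both decides when an auxiliary modality is needed and generates it, rather than relying on an external depth or segmentation tool.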