SpaceControl: Introducing Test-Time Spatial Control to 3D Generative Modeling
December 5, 2025
Authors: Elisabetta Fedele, Francis Engelmann, Ian Huang, Or Litany, Marc Pollefeys, Leonidas Guibas
cs.AI
Abstract
Generative methods for 3D assets have recently achieved remarkable progress, yet providing intuitive and precise control over the object geometry remains a key challenge. Existing approaches predominantly rely on text or image prompts, which often fall short in geometric specificity: language can be ambiguous, and images are cumbersome to edit. In this work, we introduce SpaceControl, a training-free test-time method for explicit spatial control of 3D generation. Our approach accepts a wide range of geometric inputs, from coarse primitives to detailed meshes, and integrates seamlessly with modern pre-trained generative models without requiring any additional training. A controllable parameter lets users trade off between geometric fidelity and output realism. Extensive quantitative evaluation and user studies demonstrate that SpaceControl outperforms both training-based and optimization-based baselines in geometric faithfulness while preserving high visual quality. Finally, we present an interactive user interface that enables online editing of superquadrics for direct conversion into textured 3D assets, facilitating practical deployment in creative workflows. Find our project page at https://spacecontrol3d.github.io/
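The superquadrics mentioned in the abstract are a standard family of parametric shape primitives controlled by two roundness exponents. As a point of reference only (this is the conventional inside-outside formulation, not the paper's implementation), a minimal sketch:

```python
def superquadric_F(x, y, z, a=(1.0, 1.0, 1.0), eps1=1.0, eps2=1.0):
    """Standard superquadric inside-outside function.

    a = (a1, a2, a3) are the axis extents; eps1, eps2 control roundness
    (eps1 = eps2 = 1 gives an ellipsoid; values near 0 give box-like shapes).
    Returns F with F < 1 inside the surface, F == 1 on it, F > 1 outside.
    Names and defaults here are illustrative, not from the paper.
    """
    # Normalize by axis extents; absolute values keep fractional powers real.
    nx, ny, nz = abs(x) / a[0], abs(y) / a[1], abs(z) / a[2]
    return (nx ** (2.0 / eps2) + ny ** (2.0 / eps2)) ** (eps2 / eps1) \
        + nz ** (2.0 / eps1)


# Example: with eps1 = eps2 = 1 the shape is the unit sphere, so a point
# on the x-axis at radius 1 lies exactly on the surface.
print(superquadric_F(1.0, 0.0, 0.0))  # → 1.0
```

Editing such a primitive amounts to adjusting the extents `a` and the exponents `eps1`, `eps2`, which is the kind of coarse geometric input the method accepts.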