SpaceControl: Introducing Test-Time Spatial Control to 3D Generative Modeling
December 5, 2025
Authors: Elisabetta Fedele, Francis Engelmann, Ian Huang, Or Litany, Marc Pollefeys, Leonidas Guibas
cs.AI
Abstract
Generative methods for 3D assets have recently achieved remarkable progress, yet providing intuitive and precise control over the object geometry remains a key challenge. Existing approaches predominantly rely on text or image prompts, which often fall short in geometric specificity: language can be ambiguous, and images are cumbersome to edit. In this work, we introduce SpaceControl, a training-free test-time method for explicit spatial control of 3D generation. Our approach accepts a wide range of geometric inputs, from coarse primitives to detailed meshes, and integrates seamlessly with modern pre-trained generative models without requiring any additional training. A controllable parameter lets users trade off between geometric fidelity and output realism. Extensive quantitative evaluation and user studies demonstrate that SpaceControl outperforms both training-based and optimization-based baselines in geometric faithfulness while preserving high visual quality. Finally, we present an interactive user interface that enables online editing of superquadrics for direct conversion into textured 3D assets, facilitating practical deployment in creative workflows. Find our project page at https://spacecontrol3d.github.io/
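For context on the kind of coarse geometric input the abstract mentions, the sketch below shows the standard superquadric inside-outside function and a parametric surface sampler. This is an illustrative aside, not the paper's implementation: the function names, default exponents, and the point-cloud conversion are assumptions introduced here.

```python
import numpy as np

def superquadric_inside_outside(points, scale=(1.0, 1.0, 1.0), eps1=0.5, eps2=0.5):
    """Standard superquadric inside-outside function F(x, y, z).

    F < 1: inside the superquadric, F == 1: on the surface, F > 1: outside.
    `scale` holds the per-axis extents (a1, a2, a3); eps1/eps2 are the shape
    exponents controlling roundness vs. squareness.
    """
    x, y, z = (np.abs(points[:, i]) / scale[i] for i in range(3))
    xy_term = (x ** (2.0 / eps2) + y ** (2.0 / eps2)) ** (eps2 / eps1)
    return xy_term + z ** (2.0 / eps1)

def sample_superquadric_surface(n=2048, scale=(1.0, 0.6, 0.4), eps1=0.5, eps2=0.9, seed=0):
    """Sample surface points from the parametric superquadric equations.

    Illustrative only: one plausible way to turn an edited superquadric into a
    point cloud that a 3D generator could be conditioned on.
    """
    rng = np.random.default_rng(seed)
    eta = rng.uniform(-np.pi / 2, np.pi / 2, n)   # latitude angle
    omega = rng.uniform(-np.pi, np.pi, n)         # longitude angle
    sgnpow = lambda v, e: np.sign(v) * np.abs(v) ** e  # signed power
    x = scale[0] * sgnpow(np.cos(eta), eps1) * sgnpow(np.cos(omega), eps2)
    y = scale[1] * sgnpow(np.cos(eta), eps1) * sgnpow(np.sin(omega), eps2)
    z = scale[2] * sgnpow(np.sin(eta), eps1)
    return np.stack([x, y, z], axis=1)

if __name__ == "__main__":
    pts = sample_superquadric_surface()
    print(pts.shape)                                          # (2048, 3)
    print(superquadric_inside_outside(pts, scale=(1.0, 0.6, 0.4),
                                      eps1=0.5, eps2=0.9)[:5].round(3))  # ~1.0 on the surface
```

Varying `eps1`/`eps2` smoothly morphs the primitive between ellipsoid-, cylinder-, and box-like shapes, which is what makes superquadrics convenient handles for the kind of interactive coarse editing described above.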