ShapeR: Robust Conditional 3D Shape Generation from Casual Captures
January 16, 2026
Authors: Yawar Siddiqui, Duncan Frost, Samir Aroudj, Armen Avetisyan, Henry Howard-Jenkins, Daniel DeTone, Pierre Moulon, Qirui Wu, Zhengqin Li, Julian Straub, Richard Newcombe, Jakob Engel
cs.AI
Abstract
Recent advances in 3D shape generation have achieved impressive results, but most existing methods rely on clean, unoccluded, and well-segmented inputs. Such conditions are rarely met in real-world scenarios. We present ShapeR, a novel approach for conditional 3D object shape generation from casually captured sequences. Given an image sequence, we leverage off-the-shelf visual-inertial SLAM, 3D detection algorithms, and vision-language models to extract, for each object, a set of sparse SLAM points, posed multi-view images, and machine-generated captions. A rectified flow transformer trained to effectively condition on these modalities then generates high-fidelity metric 3D shapes. To ensure robustness to the challenges of casually captured data, we employ a range of techniques including on-the-fly compositional augmentations, a curriculum training scheme spanning object- and scene-level datasets, and strategies to handle background clutter. Additionally, we introduce a new evaluation benchmark comprising 178 in-the-wild objects across 7 real-world scenes with geometry annotations. Experiments show that ShapeR significantly outperforms existing approaches in this challenging setting, achieving a 2.7x improvement in Chamfer distance over the state of the art.
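As a rough illustration of the generation stage described above, below is a minimal sketch of Euler-step sampling from a rectified-flow model conditioned on the three modalities the abstract lists: posed multi-view images, sparse SLAM points, and a caption embedding. The `model` interface, latent shape, and step count are hypothetical assumptions for illustration, not the paper's actual implementation.

```python
# Hypothetical sketch (not the authors' code): sampling a shape latent from a
# rectified-flow transformer by forward-Euler integration of its velocity field,
# conditioned on posed images, sparse SLAM points, and a caption embedding.
import torch

@torch.no_grad()
def sample_shape(model, images, slam_points, caption_emb,
                 steps=50, latent_shape=(1, 4096, 8)):
    """Integrate the learned velocity field from noise (t=0) to a shape latent (t=1)."""
    x = torch.randn(latent_shape)                  # start from Gaussian noise
    ts = torch.linspace(0.0, 1.0, steps + 1)
    for i in range(steps):
        t = ts[i].expand(x.shape[0])               # per-sample timestep
        # Assumed interface: the transformer predicts a velocity given the
        # current latent, the timestep, and the fused conditioning signals.
        v = model(x, t, images=images, points=slam_points, caption=caption_emb)
        x = x + (ts[i + 1] - ts[i]) * v            # forward Euler step
    return x                                       # decoded downstream into a metric 3D mesh
```

The straight-line transport paths that rectified flow is trained toward make this kind of few-step Euler integration a common sampling choice; how the three conditioning modalities are actually fused inside the transformer is the part the paper trains and does not follow from this sketch.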