LaS-Comp: Zero-shot 3D Completion with Latent-Spatial Consistency
February 21, 2026
Authors: Weilong Yan, Haipeng Li, Hao Xu, Nianjin Ye, Yihao Ai, Shuaicheng Liu, Jingyu Hu
cs.AI
Abstract
This paper introduces LaS-Comp, a zero-shot and category-agnostic approach that leverages the rich geometric priors of 3D foundation models to enable 3D shape completion across diverse types of partial observations. Our contributions are threefold. First, we harness these powerful generative priors for completion through a complementary two-stage design: (i) an explicit replacement stage that preserves the geometry of the partial observation to ensure faithful completion; and (ii) an implicit refinement stage that ensures seamless boundaries between the observed and synthesized regions. Second, our framework is training-free and compatible with different 3D foundation models. Third, we introduce Omni-Comp, a comprehensive benchmark combining real-world and synthetic data with diverse and challenging partial patterns, enabling a more thorough and realistic evaluation. Both quantitative and qualitative experiments demonstrate that our approach outperforms previous state-of-the-art methods. Our code and data will be available at https://github.com/DavidYan2001/LaS-Comp.
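To make the two-stage idea concrete, here is a minimal toy sketch on 2D latent grids, assuming latents can be represented as arrays and the observed region as a binary mask. The function names (`explicit_replace`, `implicit_refine`), the neighbour-averaging refinement, and all shapes are illustrative assumptions, not the authors' actual implementation, which operates on the latents of a 3D foundation model.

```python
# Toy illustration (not the paper's implementation): stage 1 overwrites the
# generated latent with the observed latent; stage 2 smooths the seam while
# re-anchoring the observed region so faithfulness is preserved.
import numpy as np

def explicit_replace(generated, observed, mask):
    """Stage 1: keep the observed latent wherever mask == True,
    the synthesized latent elsewhere."""
    return np.where(mask, observed, generated)

def implicit_refine(latent, mask, iters=10):
    """Stage 2 (toy stand-in): diffuse values near the observed/synthesized
    boundary via 4-neighbour averaging so the two regions join smoothly,
    re-anchoring observed cells after every iteration."""
    out = latent.copy()
    for _ in range(iters):
        padded = np.pad(out, 1, mode="edge")
        avg = (padded[:-2, 1:-1] + padded[2:, 1:-1] +
               padded[1:-1, :-2] + padded[1:-1, 2:]) / 4.0
        out = np.where(mask, latent, avg)  # faithfulness constraint
    return out

rng = np.random.default_rng(0)
generated = rng.normal(size=(8, 8))   # latent from a generative prior (toy)
observed = np.ones((8, 8))            # encoded partial observation (toy)
mask = np.zeros((8, 8), dtype=bool)
mask[:4, :] = True                    # top half is observed

fused = explicit_replace(generated, observed, mask)
refined = implicit_refine(fused, mask)
```

In this sketch the observed cells are bitwise identical before and after refinement, while the jump across the observed/synthesized boundary shrinks, mirroring the paper's split between faithfulness (explicit replacement) and seamlessness (implicit refinement).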