

LaS-Comp: Zero-shot 3D Completion with Latent-Spatial Consistency

February 21, 2026
Authors: Weilong Yan, Haipeng Li, Hao Xu, Nianjin Ye, Yihao Ai, Shuaicheng Liu, Jingyu Hu
cs.AI

Abstract

This paper introduces LaS-Comp, a zero-shot, category-agnostic approach that leverages the rich geometric priors of 3D foundation models to complete 3D shapes across diverse types of partial observations. Our contributions are threefold. First, we harness these powerful generative priors for completion through a complementary two-stage design: (i) an explicit replacement stage preserves the partially observed geometry to ensure faithful completion, and (ii) an implicit refinement stage ensures seamless boundaries between the observed and synthesized regions. Second, our framework is training-free and compatible with different 3D foundation models. Third, we introduce Omni-Comp, a comprehensive benchmark combining real-world and synthetic data with diverse and challenging partial patterns, enabling a more thorough and realistic evaluation. Both quantitative and qualitative experiments demonstrate that our approach outperforms previous state-of-the-art methods. Our code and data will be available at https://github.com/DavidYan2001/LaS-Comp.
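The two-stage design in the abstract can be pictured as a completion loop around a frozen generative model: the explicit replacement stage overwrites the observed region of the latent with the encoding of the partial input at every step, and the implicit refinement stage then smooths the seam between observed and synthesized regions. The abstract gives no implementation details, so the following is a minimal sketch under stated assumptions: a 2D latent grid, a caller-supplied `denoise_step` standing in for one step of a 3D foundation model, and hypothetical helper names (`complete_latent`, `boundary_band`) and a simple linear blending rule that are not from the paper.

```python
import numpy as np

def boundary_band(mask):
    """Cells adjacent to the observed/synthesized boundary (2D case).

    A cell is on the seam if its 4-neighbourhood mixes observed and
    unobserved cells. This is an illustrative heuristic, not the
    paper's method.
    """
    pad = np.pad(mask, 1, mode="edge")
    all_neigh = pad[:-2, 1:-1] & pad[2:, 1:-1] & pad[1:-1, :-2] & pad[1:-1, 2:]
    any_neigh = pad[:-2, 1:-1] | pad[2:, 1:-1] | pad[1:-1, :-2] | pad[1:-1, 2:]
    return any_neigh & ~all_neigh  # mixed neighbourhood => seam

def complete_latent(denoise_step, z_init, z_obs, mask, steps=10, blend=0.5):
    """Hypothetical two-stage latent completion loop.

    denoise_step: one step of a (frozen) generative model's denoiser.
    z_obs: latent encoding of the partial observation.
    mask: boolean array, True where geometry was observed.
    """
    z = z_init
    for _ in range(steps):
        z = denoise_step(z)
        # Stage (i): explicit replacement keeps the observed region
        # faithful to the input geometry at every step.
        z = np.where(mask, z_obs, z)
    # Stage (ii): implicit refinement softens the seam by blending the
    # replaced latent with one more model pass near the boundary only.
    z_refined = denoise_step(z)
    seam = boundary_band(mask)
    return np.where(seam, blend * z + (1.0 - blend) * z_refined, z)
```

Note the division of labour: replacement alone would leave a visible discontinuity at the mask edge, while refinement alone could drift away from the observed geometry; restricting the blend to a seam band keeps both properties.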